PAPERS ADDED IN THE PERIOD 1996-1999 (APPROXIMATELY)
PAPERS 1996 -- 1999 CONTENTS LIST
RETURN TO MAIN COGAFF INDEX
This file is
http://www.cs.bham.ac.uk/research/projects/cogaff/96-99.html
Maintained by Aaron Sloman.
It contains an index to files in the Cognition and Affect
Project's Web directory produced or published in the period
1996-1999. Some of the papers published in this period were produced
before 1996 and are included in the list for an earlier period
http://www.cs.bham.ac.uk/research/cogaff/81-95.html
Last updated: 2 May 2010; 13 Nov 2010; 3 Jan 2012; 7 Jul 2012; 31 Aug 2013; 14 Aug 2014.
PDF versions of postscript files can be provided on request. Email A.Sloman@cs.bham.ac.uk requesting conversion.
JUMP TO DETAILED LIST (After contents)
Title: Architectures and types of consciousness (TUCSON3 Abstract)
Author: Aaron Sloman
Title: Distributed Reflective Architectures for Adjustable Autonomy
Author: C. Kennedy
Title: Evolution of Self-Definition
Author: C. Kennedy
Title: PhD Thesis Proposal: Distributed Reflective Architectures
Author: C. Kennedy
Title: Patrice Terrier interviews Aaron Sloman for EACE QUARTERLY (August 1999)
Title: Beyond Shallow Models of Emotion
(Originally presented at
I3 Spring Days Workshop
on Behavior planning for life-like
characters and avatars Sitges, Spain, March 1999)
Author: Aaron Sloman
Title: Architecture-Based Conceptions Of Mind (Superseded version)
(Abstract for invited talk at
11th International Congress of Logic, Methodology and Philosophy of Science,
Krakow, Poland
August 20-26, 1999.)
Author: Aaron Sloman
Title: Why can't a goldfish long for its mother?
Architectural prerequisites for various types of emotions.
(Slides for invited talk at Conference on Affective Computing, April
1999, UCL.)
Author: Aaron Sloman
Title: Building cognitively rich agents using the SIM_AGENT
toolkit (in CACM March 1999),
Authors: Aaron Sloman and Brian Logan
Title: Architectural Requirements for Human-like Agents Both Natural and
Artificial (What sorts of machines can love?)
Authors: Aaron Sloman
Title: Towards a Grammar of Emotions (Now in a different file.)
Author: Aaron Sloman
Title: Are brains computers? (Slides for debate at LSE)
Authors: Aaron Sloman
Title: State space search with prioritised soft constraints
Authors: Brian Logan and Natasha Alechina
Title: A* (Astar) with bounded costs
Authors: Brian Logan and Natasha Alechina
Title: Qualitative Decision Support using Prioritised Soft Constraints
Authors: Brian Logan and Aaron Sloman
Title: SIM_AGENT two years on
Authors: B. Logan, J. Baxter, R. Hepplewhite and A. Sloman
Title: What sorts of brains can support what sorts of minds?
Authors: Aaron Sloman
Title: Review of Affective Computing by Rosalind Picard, MIT Press
Authors: Aaron Sloman
Title: Slides for presentation on: What's an AI toolkit for?
Authors: Aaron Sloman
Title: Diagrams in the Mind?
Authors: Aaron Sloman
Title: The "Semantics" of Evolution: Trajectories and Trade-offs
Authors: Aaron Sloman
Title: Damasio, Descartes, Alarms and Meta-management
Authors: Aaron Sloman
Title: Classifying Agent Systems
Authors: Brian Logan
Title: What's an AI toolkit for?
Authors: Aaron Sloman
Title: The evolution of what?
Authors: Aaron Sloman
Title: Architectures and Tools for Human-Like Agents
Authors: Aaron Sloman and Brian Logan
Title: WHAT SORTS OF MACHINES CAN LOVE?
Authors: Aaron Sloman
Title: Cognition and affect: Architectures and tools
Authors: Brian Logan and Aaron Sloman
Title: Supervenience and Implementation: Virtual and Physical Machines
Authors: Aaron Sloman
Title: Design Spaces, Niche Spaces and the "Hard" Problem
Authors: Aaron Sloman
Title: The evolutionary engine and the mind machine:
A design-based study of adaptive change
Authors: Chris Complin
Title: Agent route planning in complex terrains
Authors: Brian Logan and Aaron Sloman
Title: Route planning with ordered constraints
Authors: Brian Logan
Title: Route planning in the space of complete plans
Authors: Brian Logan and Riccardo Poli
Title: Route planning with GA* (GAstar)
Authors: Brian Logan and Riccardo Poli
Title: Emotional Agents (PhD Thesis)
Authors: Ian Wright
Title: What sort of architecture is required for a human-like agent?
Authors: Aaron Sloman
Title: Designing Human-Like Minds
Authors: Aaron Sloman
Title: Architectural Requirements for Autonomous Human-like Agents
Authors: Aaron Sloman
Title: Synthetic Minds
Authors: Aaron Sloman and Brian Logan
Title: MINDER1: An implementation of a protoemotional agent architecture
Authors: Ian Wright, Aaron Sloman
Title: The society of mind requires an economy of mind
Authors: Ian Wright, Michel Aube
Title: Actual Possibilities
Authors: Aaron Sloman
Title: What sort of architecture can support emotionality?
Authors: Aaron Sloman
Title: Evolving Optimal Populations with XCS Classifier Systems
Authors: Tim Kovacs
Title: Reactive and Motivational Agents: Towards a Collective Minder
Author: Darryl Davis
Title: What sort of architecture is required for a human-like agent?
Authors: Aaron Sloman
Title: Route planning in the space of complete plans
Authors: Brian Logan and Riccardo Poli
Title: On the relations between search and evolutionary algorithms
Authors: Brian Logan and Riccardo Poli
Title: Design Requirements for a Computational Libidinal Economy
Authors: Ian Wright
Title: A systems approach to consciousness
Authors: Aaron Sloman
Title: Reinforcement learning and animat emotions
Authors: Ian Wright
Title: What is it like to be a Rock? (DRAFT)
Authors: Aaron Sloman
Title: What sort of control system is able to have a personality?
Authors: Aaron Sloman
Title: SIM_AGENT: A toolkit for exploring agent designs
Authors: Aaron Sloman and Riccardo Poli
Title: Towards a Design-Based Analysis of Emotional Episodes
Authors: Ian Wright, Aaron Sloman, Luc Beaudoin
Title: Beyond Turing Equivalence
Authors: Aaron Sloman
Filename: sloman-tucson3.txt
Title: Architectures and types of consciousness (TUCSON3 Abstract)
Author: Aaron Sloman
Date Installed: 15 Jan 2007 (Published 1998)
Abstract:
This abstract was included in the 'Philosophy' section of the proceedings of the conference Toward a Science of Consciousness 1998 ("Tucson III"), April 27 - May 2, 1998, Tucson, Arizona. All the abstracts are online here.
Title: Distributed Reflective Architectures for Adjustable Autonomy
Author: C. Kennedy
Abstract:
A decision made by an autonomous system to adjust its autonomy status (e.g.
override manual control) must be based on
reliable information. In particular, the system's anomaly-detection mechanisms
must be intact. To ensure this, a high degree
of self-monitoring (reflective
coverage) is necessary. We propose a distributed reflective system, where the
participating agents monitor each other's performance and software execution
patterns. We focus on two things: monitoring of the anomaly-detection
components of an agent (which we call meta-observation) and evaluating the
"quality" of the agent's actions (does it make the world better or worse?).
Using a simple scenario, we argue that these features can
enhance the reliability of autonomy adjustment.
Title: Evolution of Self-Definition
Author: C. Kennedy
Abstract:
When considering an architecture for an artificial immune system, it is
generally agreed that discrimination between self and non-self is required.
With current immune system models, the definition of "self" is usually
concerned with patterns associated with normal usage. However, this has the
disadvantage that the discrimination process itself may be disabled by a virus
and there is no way to detect this because the algorithms controlling the
pattern recognition are not included in the self-definition. To avoid an
infinite regress of increasingly higher levels of reflection, we propose a
model of mutual reflection based on a multi-agent network where each agent
monitors and protects a subset of other agents and is itself monitored and
protected by them. The whole network is then the self-definition. The paper
presents a conceptual framework for the evolution of algorithms to enable
agents in the network to become mutually protective. If there is no critical
dependence on a global management component, this property of symbiosis can
lead to a more robust form of distributed self-nonself distinction.
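The mutual-monitoring idea above is concrete enough to sketch in code. The following toy simulation is not from Kennedy's papers; it is a minimal illustration, assuming a ring topology in which each agent checks both a neighbour's behaviour and that neighbour's anomaly detector (the "meta-observation" step). All class and function names are invented for the example.

```python
# Minimal sketch (not from the papers): a ring of agents in which each agent
# monitors the next one's output AND its anomaly detector, so no single
# component is left outside the network's "self-definition".

class Agent:
    def __init__(self, name):
        self.name = name
        self.detector_ok = True      # is my anomaly detector intact?
        self.output = 0              # last action/result, 0 = normal

    def detect_anomaly(self, peer):
        """Ordinary observation: is the peer's behaviour normal?"""
        if not self.detector_ok:
            return False             # a disabled detector reports nothing
        return peer.output != 0

    def meta_observe(self, peer, probe=1):
        """Meta-observation: feed the peer's detector a known anomaly
        and check that it fires."""
        class Probe:                 # stand-in peer with anomalous output
            output = probe
        return peer.detect_anomaly(Probe())

def sweep(agents):
    """One monitoring cycle over a ring: agent i watches agent i+1."""
    reports = []
    for i, a in enumerate(agents):
        peer = agents[(i + 1) % len(agents)]
        if a.detect_anomaly(peer):
            reports.append((a.name, peer.name, "anomalous behaviour"))
        if not a.meta_observe(peer):
            reports.append((a.name, peer.name, "detector disabled"))
    return reports

agents = [Agent(f"A{i}") for i in range(4)]
agents[2].detector_ok = False        # simulate a compromised detector
agents[1].output = 7                 # simulate anomalous behaviour
print(sweep(agents))
```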
Title: PhD Thesis Proposal: Distributed Reflective Architectures
Author: C. Kennedy
Abstract:
The autonomy of a system can be defined as its capability to recover
from unforeseen difficulties without any user intervention.
This thesis proposal addresses a small part of this problem, namely the
detection of anomalies within a system's own operation by the system
itself. It is a response to a challenge presented by immune systems
which can distinguish between "self" and "nonself", i.e. they can
recognise a "foreign" pattern (due to a virus or bacterium) as different
from those associated with the organism itself, even if the pattern was
not previously encountered. The aim is to apply this requirement to an
artificial system, where "nonself" may be any form of deliberate
intrusion or random anomalous behaviour due to a fault. When designing
reflective architectures or self-diagnostic systems,
it is simpler to rely on a single coordination mechanism to make the
system work as intended. However, such a coordination mechanism cannot
be inspected or repaired by the system itself, which means that there is
a gap in its reflective coverage. To try to overcome this limitation,
this thesis proposal suggests a conceptual framework based on a network
of agents where each agent monitors the whole network from a unique and
independent perspective and where the perspectives are not globally
"managed". Each agent monitors the fault-detection capability and
control algorithms of other agents (a process called meta-observation).
In this way, the agents can collectively achieve reflective coverage of
failures.
Filename: Sloman.eace-interview.html
Title: Patrice Terrier interviews Aaron Sloman
for EACE QUARTERLY
(August 1999)
Date: 3 Sep 1999
Abstract:
Patrice Terrier asks and Aaron Sloman attempts to answer questions about
AI, about emotions, about the relevance of philosophy
to AI, about Poplog, Sim_agent and other tools.
(EACE = European Association for Cognitive Ergonomics.)
This paper has been superseded by a longer revised version with the same name in Cognitive Processing, Vol 1, 2001, pp 1-22 (Summer 2001), available here.
Title: Beyond Shallow Models of Emotion
(Originally presented at the I3 Spring Days Workshop on Behavior planning for life-like characters and avatars, Sitges, Spain, March 1999)
Author: Aaron Sloman
Abstract:
There is much shallow thinking about emotions, and a huge diversity of definitions of "emotion" arises out of this shallowness. Too often the definitions and theories are inspired either by a mixture of introspection and selective common sense, or by a misdirected neo-behaviourist methodology, attempting to define emotions and other mental states in terms of observables. One way to avoid such shallowness, and perhaps achieve convergence, is to base concepts and theories on an information processing architecture, which is subject to various constraints, including evolvability, implementability, coping with resource-limited physical mechanisms, and achieving required functionality. Within such an architecture-based theory we can distinguish primary emotions, secondary emotions, and tertiary emotions, and produce a coherent theory which not only explains a wide range of phenomena but also partly explains the diversity of theories: most of them focus on only a subset of types of emotions.
Title: Architecture-Based Conceptions Of Mind (Superseded version)
(Abstract for invited talk at the 11th International Congress of Logic, Methodology and Philosophy of Science, Krakow, Poland, August 20-26, 1999.)
Author: Aaron Sloman
Date: 8 Jun 1999
NOTE: The link now points to the final, published version of the paper:
http://www.cs.bham.ac.uk/research/projects/cogaff/00-02.html#lmpsfinal
Abstract: (This was a short abstract. See later version)
Because we apparently have direct access to the phenomena, it is
tempting to think we know exactly what we are talking about when we
refer to consciousness, experience, the "first-person" viewpoint, etc.
But this is as mistaken as thinking we fully understand what
simultaneity is just because we have direct access to the phenomena, for
instance when we see a flash and hear a bang simultaneously.
Einstein taught us otherwise. From the fact that we can recognise some instances of a concept it does not follow that we know what is meant in general by saying that something is or is not an instance. Endless debates about which animals and which types of machines have consciousness are among the many symptoms that our concepts of mentality are more confused than we realise.
Too often people thinking about mind and consciousness consider only adult human minds in an academic culture, ignoring people from other cultures, infants, people with brain damage or disease, insects, birds, chimpanzees and other animals, as well as robots and software agents in synthetic environments. By broadening our view, we find evidence for diverse information processing architectures, each supporting and explaining a specific combination of mental capabilities.
When concepts connote complex clusters of capabilities, different subsets may be present at different stages of development of a species or an individual. Very different subsets may be found in different species. Different subsets may be impaired by different sorts of brain damage or degeneration. When we know what sorts of components are implicitly referred to by our pre-theoretic "cluster concepts" we can then define new, more precise concepts in terms of different subsets. It helps if we can specify the architectures which generate different subsets of information processing capabilities. That also enables us to ask new, deeper questions not only about the development of individuals but about the evolution of mentality in different species.
Architecture-based concepts generated in the framework of virtual machine functionalism subvert familiar philosophical thought experiments about zombies, since attempts to specify a zombie with the right kind of virtual machine functionality but lacking our mental states degenerate into incoherence when spelled out in great detail. When you have fully described the internal states, processes, dispositions and causal interactions within a zombie whose information processing functions are alleged to be exactly like ours, the claim that something might still be missing becomes incomprehensible.
Title: Why can't a goldfish long for its mother?
Architectural prerequisites for various types of emotions.
(Slides for invited talk at the Conference on Affective Computing, April 1999, UCL.)
Author: Aaron Sloman
Date: 11 Apr 1999
Abstract:
(Intended as a partial antidote to widespread shallow views about
emotions, and over-simplified ontologies too easily accepted by AI and
HCI researchers now becoming interested in intelligence and affect.)
Our everyday attributions of emotions, moods, attitudes, desires, and other affective states implicitly presuppose that people are information processors. To long for something you need to know of its existence, its remoteness, and the possibility of being together again. Besides these semantic information states, longing also involves a control state. One who has deep longing for X does not merely occasionally think it would be wonderful to be with X. In deep longing thoughts are often uncontrollably drawn to X.
We need to understand the architectural underpinnings of control of attention, so that we can see how control can be lost. Having control requires being able to some extent to monitor one's thought processes, to evaluate them, and to redirect them. Only "to some extent" because both access and control are partial. We need to explain why. (In addition, self-evaluation can be misguided, e.g. after religious indoctrination!)
"Tertiary emotions" like deep longing are different from "primary" emotions (e.g. being startled or sexually aroused) and "secondary emotions" (e.g. being apprehensive or relieved) which, to some extent, we share with other animals. Can chimps, bonobos or human toddlers have tertiary emotions? To clarify the empirical questions and explain the phenomena we need a good model of the information processing architecture.
Conjecture: various modules in the human mind (perceptual, motor, and more central modules) all have architectural layers that evolved at different times and support different kinds of functionality, including reactive, deliberative and self-monitoring processes.
Different types of affect are related to the functioning of these different layers: e.g. primary emotions require only reactive layers, secondary emotions require deliberative layers (including "what if" reasoning mechanisms) and tertiary emotions (e.g. deep longing, humiliation, infatuation) involve additional self evaluation and self control mechanisms which evolved late and may be rare among animals.
An architecture-based framework can bring some order into the morass of studies of affect (e.g. myriad definitions of "emotion"). This will help us understand which kinds of emotions can arise in software agents that lack the reactive mechanisms required for controlling a physical body.
HCI Designers need to understand these issues (a) if they want to model human affective processes, (b) if they wish to design systems which engage fruitfully with human affective processes, (c) if they wish to produce teaching/training packages for would-be counsellors, psychotherapists, psychologists.
Title: Building cognitively rich agents using the SIM_AGENT toolkit (in CACM, March 1999)
Authors: Aaron Sloman and Brian Logan
Abstract:
An overview of some of the motivation for our research and of the design criteria for the SIM_AGENT toolkit, for a special issue of CACM on multi-agent systems edited by Anupam Joshi and Munindar Singh. For more information about the toolkit (now referred to as SimAgent), including movies of demos, see http://www.cs.bham.ac.uk/research/projects/poplog/packages/simagent.html
Work on the Cognition and Affect project using the toolkit is reported here (PDF).
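For readers unfamiliar with the toolkit, its general flavour can be suggested with a toy sketch. The toolkit itself is built on Pop-11/Poplog and is far richer; the Python fragment below only illustrates the idea of agents run in discrete time slices, assuming a two-phase cycle in which sensing and internal rule-processing are separated from external actions, which are applied together at the end of each slice. Every name here is invented, not part of the toolkit's API.

```python
# Toy sketch of a SIM_AGENT-style discrete scheduler (illustrative only;
# the real toolkit is written in Pop-11 and is far more general).

class ToyAgent:
    def __init__(self, name, pos):
        self.name, self.pos = name, pos
        self.percepts, self.actions = [], []

    def sense(self, world):
        # Phase 1a: read the shared world; no side effects yet.
        self.percepts = [a.pos for a in world if a is not self]

    def think(self):
        # Phase 1b: run internal "rulesets"; queue actions, don't do them.
        if any(abs(p - self.pos) <= 1 for p in self.percepts):
            self.actions.append(("move", +1))   # simple avoidance rule

    def act(self):
        # Phase 2: actions queued by all agents are applied together,
        # so no agent sees a half-updated world within a time slice.
        for kind, delta in self.actions:
            if kind == "move":
                self.pos += delta
        self.actions = []

def run(world, cycles):
    for t in range(cycles):
        for a in world:
            a.sense(world)
        for a in world:
            a.think()
        for a in world:
            a.act()
        print(t, [(a.name, a.pos) for a in world])

run([ToyAgent("a", 0), ToyAgent("b", 1)], 3)
```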
Filename: Sloman-kd-love.pdf
Title: Architectural Requirements for Human-like Agents Both Natural and
Artificial.
(What sorts of machines can love?)
To appear in
Human Cognition And Social Agent Technology
Ed. Kerstin Dautenhahn, in the
"Advances in Consciousness Research" series, John Benjamins Publishing
Extended version of slides on love for
"Voice box" talk, presented in London (below)
Authors: Aaron Sloman
Date: 10 Jan 1999 (Book Published, March 2000)
Abstract:
This paper, an expanded version of a talk on love given to a literary
society, attempts to analyse some of the architectural requirements for
an agent which is capable of having primary, secondary and tertiary
emotions, including being infatuated or in love. It elaborates on work
done previously in the Birmingham Cognition and Affect group, describing
our proposed three-level architecture (with reactive, deliberative and
meta-management layers), showing how different sorts of emotions relate
to those layers.
Some of the relationships between emotional states involving partial loss of control of attention (e.g. emotional states involved in being in love) and other states which involve dispositions (e.g. attitudes such as loving) are discussed and related to the architecture.
The work of poets and playwrights can be shown to involve an implicit commitment to the hypothesis that minds are (at least) information processing engines. Besides loving, many other familiar states and processes such as seeing, deciding, wondering whether, hoping, regretting, enjoying, disliking, learning, planning and acting all involve various sorts of information processing.
By analysing the requirements for such processes to occur, and relating them to our evolutionary history and what is known about animal brains, and comparing this with what is being learnt from work on artificial minds in artificial intelligence, we can begin to formulate new and deeper theories about how minds work, including how we come to think about qualia, many forms of learning and development, and results of brain damage or abnormality.
But there is much prejudice that gets in the way of such theorising, and also much misunderstanding because people construe notions of "information processing" too narrowly.
Title: Are brains computers? (Slides for debate at LSE)
Author: Aaron Sloman
Abstract: A discussion of some of the commonalities between brains and computers as physical systems within which information processing machines can be implemented. Includes a distinction between machines which manipulate energy and forces, machines which manipulate matter, and machines which process information. Concludes that we still have much to learn about computers and brains, and that although it seems likely that brains are computers we don't yet know what sorts of computers they are.
Title: State space search with prioritised soft constraints
Authors: Brian Logan and Natasha Alechina
Title: A* (Astar) with bounded costs
Authors: Brian Logan and Natasha Alechina
Filename: ftp://ftp.cs.bham.ac.uk/pub/tech-reports/1998/CSRP-98-14.ps.gz
Title: Qualitative Decision Support using Prioritised Soft Constraints
Authors: Brian Logan and Aaron Sloman
Technical report CSRP-98-14, University of Birmingham School of Computer Science, 1998.
Date: April 1998
Title: SIM_AGENT two years on
Authors: B. Logan, J. Baxter, R. Hepplewhite and A. Sloman
Filename: Sloman.biota98.html
Filename: Sloman.biota.slides.ps
Filename: Sloman.biota.slides.pdf
Title: What sorts of brains can support what sorts of minds?
Authors: Aaron Sloman
Date: 19 Oct 1998
Abstract:
The HTML file is the abstract for an invited talk at the
DIGITAL BIOTA 2
Conference
The .ps and .pdf files are postscript and PDF files containing
slightly extended versions of the slides I presented at the conference.
Title: Review of Affective Computing by Rosalind Picard, MIT Press
Author: Aaron Sloman
Abstract:
This review summarises the main themes of Picard's book, some of which
are related to Damasio's ideas in Descartes' Error. In
particular, I try to show that not all secondary emotions need manifest
themselves via the primary emotion system, and therefore they will not
all be detectable by measurements
of physiological changes. I agree with
much of the spirit of the book, but disagree on detail.
NOTE: Rosalind Picard's reply to this review is available online
at
http://www.findarticles.com/cf_dls/m2483/1_20/54367782/p1/
Filename: Sloman.toolworkshop.slides.pdf
Filename: Sloman.toolworkshop.slides.ps
Title: Slides for presentation on: What's an AI toolkit for?
Authors: Aaron Sloman
Date: 24 July 1998
Abstract:
The paper "What's an AI toolkit for", presented at
AAAI-98 Workshop on Software Tools for Developing Agents
at AAAI98 in Madison, USA, July 1998, is listed below. This file
contains the slides (two slides per A4 page) prepared for the
presentation.
Filename: Sloman.twd98.ps (superseded)
Filename: Sloman.twd98.pdf (superseded)
Title: Diagrams in the Mind? (out of date)
NB: A revised version of this paper will appear in a book published by
Springer. The revised version is listed in
a later index file
in this directory.
Authors: Aaron Sloman
Invited paper for
Thinking With Diagrams conference
at Aberystwyth, Aug 1998.
Date: Aug 1998
Abstract:
Clearly we can solve problems by thinking about them. Sometimes we have
the impression that in doing so we use words, at other times diagrams or
images. Often we use both. What is going on when we use mental diagrams
or images? This question is addressed in relation
to the more general multi-pronged question: what are representations,
what are they for, how many different types are there, in how many
different ways can they be used, and what difference does it make
whether they are in the mind or on paper? The question is related to
deep problems about how vision and spatial manipulation work. It is
suggested that we are far from understanding what's going on. In
particular we need to explain how people understand spatial structure
and motion, and I'll try to suggest that this is a problem with hidden
depths, since our grasp of spatial structure is inherently a grasp of a
complex range of possibilities and their implications. Two
classes of examples discussed at length illustrate requirements for
human visualisation capabilities. One is the problem of removing
undergarments without removing outer garments. The other is thinking
about infinite discrete mathematical structures.
Title: The "Semantics" of Evolution: Trajectories and Trade-offs
Author: Aaron Sloman
Abstract:
This paper attempts to characterise a unifying overview of the practice
of software engineers, AI designers, developers of evolutionary forms of
computation, designers of adaptive systems, etc. The topic overlaps with
theoretical biology, developmental psychology and perhaps some aspects
of social theory. Just as much of theoretical computer science follows
the lead of engineering intuitions and tries to formalise them, there
are also some important emerging high level cross disciplinary ideas
about natural information processing architectures and evolutionary
mechanisms that can perhaps be unified and formalised in the future.
There is some speculation about the evolution of human cognitive
architectures and consciousness.
Title: Damasio, Descartes, Alarms and Meta-management
Author: Aaron Sloman
Abstract:
This paper discusses some of the requirements for the control
architecture of an intelligent human-like agent with multiple
independent, dynamically changing motives in a dynamically changing, only
partly predictable world. The architecture proposed includes a
combination of reactive, deliberative and meta-management mechanisms
along with one or more global "alarm" systems. The engineering design
requirements are discussed in relation to our evolutionary history,
evidence of brain function and recent theories of Damasio and others
about the relationships between intelligence and emotions.
(The paper was completed in haste for a deadline and I forgot to
explain why Descartes was in the title. See Damasio 1994.)
Title: Classifying Agent Systems
Author: Brian Logan
Abstract:
To select an appropriate tool or tools to build an agent-based system we need
to map from features of agent systems to implementation technologies. In this
paper we propose a simple scheme for classifying agent systems. Starting from
the notion of an agent as a cluster concept, we motivate an approach to
classification based on the identification of features of agent systems, and
use this to generate a high level taxonomy. We illustrate how the scheme can
be applied by means of some simple examples, and argue that our approach can
form the first step in developing a methodology for the selection of
implementation technologies.
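As a rough illustration of how a feature-based classification could drive technology selection (this is not the paper's actual scheme), one can represent each agent system as a set of features and pick the most specific matching rules. All feature names and technology mappings below are invented for the example.

```python
# Illustrative only: classify agent systems by feature sets and map them
# to candidate implementation technologies (features/mappings invented).

RULES = [
    # (required features) -> suggested technology
    ({"reactive"},                            "finite-state controllers"),
    ({"reactive", "deliberative"},            "hybrid layered toolkit"),
    ({"deliberative", "formal-verification"}, "BDI framework + model checker"),
    ({"multi-agent", "communication"},        "agent communication platform"),
]

def classify(features):
    """Return the most specific rules whose requirements the system meets."""
    matches = [(req, tech) for req, tech in RULES if req <= features]
    best = max((len(req) for req, _ in matches), default=0)
    return [tech for req, tech in matches if len(req) == best]

print(classify({"reactive"}))
print(classify({"reactive", "deliberative", "multi-agent", "communication"}))
```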
Title: What's an AI toolkit for?
Author: Aaron Sloman
Abstract:
This paper identifies a collection of high level questions which need to
be posed by designers of toolkits for developing intelligent agents
(e.g. What kinds of scenarios are to be developed? What sorts of agent
architectures are required? What are the scenarios to be used for? Are
speed and ease of development more or less important than speed and
robustness of the final system?). It then considers some of the toolkit
design options relevant to these issues, including some concerned with
multi-agent systems and some concerned with individual intelligent
agents of high internal complexity, including human-like agents. A
conflict is identified between requirements for exploring new types of
agent designs and requirements for formal specification, verifiability
and efficiency. The paper ends with some challenges for computer science
theorists posed by complex systems of interacting agents.
Note: my slides presented at the workshop are described above.
NB: This (1998f) paper is related to a later (2009) paper.
Filename: Sloman.consciousness.evolution.ps
Filename: Sloman.consciousness.evolution.pdf
Title: The evolution of what?
(Draft of a very long paper; comments welcome.)
Authors: Aaron Sloman
Date: 2 Mar 1998 (DRAFT VERSION)
Abstract:
There is now a huge amount of interest in consciousness among
scientists as well as philosophers, yet there is so much confusion and
ambiguity in the claims and counter-claims that it is hard to tell
whether any progress is being made. This "position paper" suggests
that we can make progress by temporarily putting to one side questions
about what consciousness is or which animals or machines have it or how
it evolved. Instead we should focus on questions about the sorts of
architectures that are possible for behaving systems and ask what sorts
of capabilities, states and processes, might be supported by different
sorts of architectures. We can then ask which organisms and machines
have which sorts of architectures. This combines the standpoint of
philosopher, biologist and engineer.
If we can find a general theory of the variety of possible architectures
(a characterisation of "design space") and the variety of
environments, tasks and roles to which such architectures are well
suited (a characterisation of "niche space") we may be able to use
such a theory as a basis for formulating new more precisely defined
concepts with which to articulate less ambiguous questions about the
space of possible minds.
For instance our initially ill-defined concept ("consciousness") might
split into a collection of more precisely defined concepts which can be
used to ask unambiguous questions with definite answers.
As a first step this paper explores a collection of conjectures
regarding architectures and their evolution. In particular we explore
architectures involving a combination of coexisting architectural levels
including: (a) reactive mechanisms which evolved very early, (b)
deliberative mechanisms which evolved later in response to pressures on
information processing resources and (c) meta-management mechanisms that
can explicitly inspect, evaluate and modify some of the contents of
various internal information structures.
It is conjectured that in response to the needs of these layers,
perceptual and action subsystems also developed layers, and also that an
"alarm" system which initially existed only within the reactive layer
may have become increasingly sophisticated and extensive as its inputs
and outputs were linked to the newer layers.
Processes involving the meta-management layer in the architecture could
explain the origin of the notion of "qualia". Processes involving the
"alarm" mechanism and mechanisms concerned with resource limits in the
second and third layers give us an explanation of three main forms of
emotion, helping to account for some of the ambiguities which have
bedevilled the study of emotion. Further theoretical and practical
benefits may come from further work based on this design-based approach
to consciousness.
A deeper, longer-term implication is the possibility of a new science
investigating laws governing possible trajectories in design space and
niche space, as these form parts of high order feedback loops in the
biosphere.
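To make the layered conjecture more concrete, here is a deliberately crude code caricature of the proposed decomposition: reactive, deliberative and meta-management layers, plus a global alarm that can preempt them all. It is a sketch of the shape of the architecture only, not an implementation of the paper's theory; every rule and name in it is hypothetical.

```python
# Schematic caricature (hypothetical, not the paper's implementation):
# three coexisting layers plus a global alarm that can interrupt them all.

class Architecture:
    def __init__(self):
        self.percepts = {}
        self.goal = None

    def reactive(self):
        # Fast, parallel, dedicated responses; no explicit look-ahead.
        if self.percepts.get("looming"):
            return "duck"

    def deliberative(self):
        # Slower, serial, resource-limited: build and compare plans.
        if self.goal:
            return f"plan steps towards {self.goal}"

    def meta_manage(self, recent_decisions):
        # Inspect and evaluate the system's own processing; redirect it.
        if recent_decisions.count("duck") > 3:
            self.goal = "move somewhere safer"

    def alarm(self):
        # Global alarm: crude, pattern-driven override linked to all layers.
        return "freeze" if self.percepts.get("predator") else None

    def cycle(self, percepts, history):
        self.percepts = percepts
        override = self.alarm()
        if override:
            return override                 # alarm preempts everything
        action = self.reactive() or self.deliberative()
        self.meta_manage(history)
        return action

arch = Architecture()
print(arch.cycle({"looming": True}, ["duck"] * 4))
print(arch.goal)   # meta-management has redirected attention
```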
Title: Architectures and Tools for Human-Like Agents
Authors: Aaron Sloman and Brian Logan
Abstract:
This paper discusses agent architectures which are describable in terms
of the "higher level" mental concepts applicable to human beings,
e.g. "believes", "desires", "intends" and "feels". We
conjecture that such concepts are grounded in a type of information
processing architecture, and not simply in observable behaviour nor in
Newell's knowledge-level concepts, nor Dennett's "intentional stance."
A strategy for conceptual exploration of architectures in design-space
and niche-space is outlined, including an analysis of design trade-offs.
The SIM_AGENT (SimAgent) toolkit, developed to support such exploration, including hybrid architectures, is described briefly.
Title: WHAT SORTS OF MACHINES CAN LOVE?
Author: Aaron Sloman
Abstract:
This is a hastily produced set of slides for a talk given at the Royal
Festival Hall on 21 Feb 1998 as part of a series of talks in the
South Bank Centre's Literature Programme. See
[Link Broken now]
http://www.sbc.org.uk/
The slides begin to apply the ideas developed in the Cognition and
Affect project to the analysis of architectural requirements for love
and various other emotional and affective states.
[THE SLIDES ARE PARTLY OUT OF DATE. See Filename: Sloman-kd-love.pdf, above.]
Title: Cognition and affect: Architectures and tools
Authors: Brian Logan and Aaron Sloman
Abstract:
Which agent architectures are capable of justifying descriptions in terms
of the 'higher level' mental concepts applicable to human beings? We
propose a new kind of architecture-based semantics for mentalistic
descriptions in which mental concepts (e.g. 'believes', 'desires',
'intends', 'mood', 'emotion', etc.) are grounded in assumptions
about information processing architectures, and not merely in concepts
based solely on Dennett's 'intentional stance'. These ideas have led to
the design of the SIM_AGENT toolkit which has been used to explore a
variety of such architectures.
Title: Supervenience and Implementation: Virtual and Physical Machines
Author: Aaron Sloman
Abstract:
How can a virtual machine X be implemented in a physical machine Y? We
know the answer as far as compilers, editors, theorem-provers, operating
systems are concerned, at least insofar as we know how to produce these
implemented virtual machines, and no mysteries are involved. This paper
is about extrapolating from that knowledge to the implementation of
minds in brains. By linking the philosopher's concept of supervenience
to the engineer's concept of implementation, we can illuminate both. In
particular, by showing how virtual machines can be implemented in
causally complete physical machines, and still have causal powers, we
remove some philosophical problems about how mental processes can be
real and can have real effects in the world even if the underlying
physical implementation has no causal gaps. This requires a theory of
ontological levels.
Note:
This is an extract from a much longer, evolving, paper, in part about
the relation between mind and brain, and in part about the more general
question of how high level abstract kinds of structures, processes and
mechanisms can depend for their existence on lower level, more concrete
kinds.
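The compiler and interpreter cases the paper extrapolates from can be shown in miniature. Below is a textbook-style toy, not material from the paper: a tiny stack-machine whose entire state exists only as patterns in the host language's data structures, yet whose virtual-level descriptions ("the stack grew", "the program computed 5") are true and causally relevant.

```python
# A virtual machine implemented in a "physical" host (here: the Python
# runtime). Every virtual-level event is fully implemented below it, yet
# the virtual-level description has its own causal story.

def run_vm(program):
    stack, ip = [], 0          # the virtual machine's entire state
    while ip < len(program):
        op, *args = program[ip]
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "jump_if_zero":
            if stack.pop() == 0:
                ip = args[0]   # virtual control flow, nowhere in the host's code
                continue
        ip += 1
    return stack

# Virtual-level claim: "this program computes 2 + 3".
print(run_vm([("push", 2), ("push", 3), ("add",)]))   # -> [5]
```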
Title: Design Spaces, Niche Spaces and the "Hard" Problem
Author: Aaron Sloman
Abstract:
This is an attempt to characterise a new unifying generalisation of the
practice of software engineers, AI designers, developers of evolutionary
forms of computation, etc. This topic overlaps with theoretical biology,
developmental psychology and perhaps some aspects of social theory (yet
to be developed!). Much of theoretical computer science follows the lead
of engineering intuitions and tries to formalise them. Likewise there
are important emerging high level cross disciplinary ideas about
processes and architectures found in nature that can be unified and
formalised, extending work done in Alife and evolutionary computation.
This paper attempts to provide a conceptual framework for thinking about
the tasks.
Within this framework we can also find a new approach to the so-called
hard problem of consciousness, based on virtual machine functionalism,
and find a new defence for a version of the "Strong AI" thesis.
Title: The evolutionary engine and the mind machine: A design-based study of adaptive change
Author: Chris Complin
Abstract:
The objectives of this thesis are to elucidate adaptive change from a
design-stance, provide a detailed examination of the concept of
evolvability and computationally model agents which undergo both
genetic and cultural evolution. Using Sloman's (1994) design-based
methodology, Darwinian evolution by natural selection is taken as a
starting point. The concept of adaptive change is analysed and the
situations where it is necessary for survival are described. A wide
array of literature from biology and evolutionary computation is used
to support the thesis that Darwinian evolution by natural selection is
not a completely random process of trial and error, but has mechanisms
which produce trial-selectivity. A number of means of creating
trial-selectivity are presented, including reproductive, developmental,
psychological and sociocultural mechanisms. From this discussion, a
richer concept of evolvability than that originally postulated by
Dawkins (1989) is expounded. Computational experiments are used to show
that the evolvability producing mechanisms can be selected as they
yield, on average, 'fitter' members in the next generation that inherit
those same mechanisms. Thus Darwinian evolution by natural selection is
shown to be an inherently adaptive algorithm that can tailor itself to
searching in different areas of design space. A second set of
computational experiments are used to explore a trajectory in design
space made up of agents with genetic mechanisms, agents with learning
mechanisms and agents with social mechanisms. On the basis of design
work the consequences of combining genetic and cultural evolutionary
systems were examined; the implementation work demonstrated that agents
with both systems could adapt at a faster rate. The work in this thesis
supports the conjecture that evolution involves a change in replicator
frequency (genetic or memetic) through the process of selective-trial
and error-elimination.
Title: Agent route planning in complex terrains
Authors: Brian Logan and Aaron Sloman
Technical report CSRP-97-30, University of Birmingham School of Computer Science, 1997.
Title: Route planning with ordered constraints
Author: Brian Logan
Title: Route planning in the space of complete plans
Authors: Brian Logan and Riccardo Poli
Title: Route planning with GA* (GAstar)
Authors: Brian Logan and Riccardo Poli
Title: Emotional Agents (PhD Thesis)
Author: Ian Wright
Abstract:
The emotions are investigated from the perspective of an Artificial
Intelligence engineer attempting to understand the requirements and design
options for autonomous resource bound agents able to operate in complex and
dynamic worlds. Both natural and artificial intelligences are viewed as more
or less complex control systems. The field of agent architecture research is
reviewed and Sloman and Beaudoin's design for human-like autonomy introduced.
The agent architecture supports an emergent processing state, called
'perturbance', which is a loss of control of thought processes. Perturbances
are a characteristic feature of many human emotional states. A broad but
shallow implementation of the agent architecture, called MINDER1, is
described. MINDER1 can support perturbant states and is an example of a
'protoemotional' agent. Several interrupt theories of the emotions are
critically reviewed, including the theories of Simon, Sloman, Oatley and
Johnson-Laird and Frijda. Criticisms of the theories are presented, in
particular how they fail to account for both learning and the mental pain and
pleasure associated with some emotional states. The field of machine
reinforcement learning is reviewed and the concept of a scalar quantity form
of value introduced. Forms of value occur in control systems that meet a
requirement for trial and error learning. A philosophical argument that a
society of mind will require an economy of mind is presented. The argument
draws on adaptive multi-agent system research and basic economic theory. It
generalises reinforcement learning to more complex systems with more complex
capabilities. A design hypothesis is proposed -- the 'currency flow
hypothesis' -- that states that a scalar quantity form of value is a common
feature of adaptive systems composed of many interacting parts. A design
specification is presented for a motivational subsystem conforming to the
currency flow hypothesis and theoretically integrated with Sloman and
Beaudoin's agent architecture. An explanation of a subset of mental pain and
pleasure is provided in terms of an agent architecture monitoring its own
processes of reinforcement, or virtual 'currency flows'. The theory is
compared to Freudian metapsychology, in particular how currency flow avoids
the vitalism associated with Freud's concept of 'libidinal energy'. The
explanatory power of the resulting theory of 'valenced perturbances', that
is painful or pleasurable loss of control of attention, is demonstrated by
providing an architecturally grounded analysis of grief. It is shown that,
amongst other phenomena, intense mental pain and loss of control of thought
processes can be readily explained in information processing terms. The thesis
concludes with suggestions for further work and prospects for building
artificial emotional agents.
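The "scalar quantity form of value" the thesis builds on can be illustrated with the standard temporal-difference update from reinforcement learning, in which a single scalar signal is the only "currency" used to re-price stored state values. The snippet below is ordinary textbook TD(0), offered as an illustration of the kind of mechanism meant, not as the thesis's own model.

```python
# Textbook TD(0) value update: a single scalar reward signal is the only
# "currency" with which experience re-prices stored state values.

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """Move V[s] towards the one-step target r + gamma * V[s_next]."""
    target = r + gamma * V.get(s_next, 0.0)
    V[s] = V.get(s, 0.0) + alpha * (target - V.get(s, 0.0))
    return V

V = {}
# A tiny two-state episode, repeated: s0 -> s1 (reward 0), s1 -> end (reward 1).
for _ in range(200):
    td0_update(V, "s0", 0.0, "s1")
    td0_update(V, "s1", 1.0, "end")
print(V)   # V["s1"] near 1.0; V["s0"] near gamma * V["s1"]
```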
Title: What sort of architecture is required for a human-like agent?
Author: Aaron Sloman
Abstract:
I outline a conjecture that many aspects of human mental functioning, including emotional states, can be explained in terms of an architecture approximately decomposable into three layers, with different evolutionary origins, shared with different animals. The oldest and most widespread is a *reactive* layer. A more recent development, probably shared with fewer animals, is a *deliberative* layer. The newest layer is concerned with *meta-management* and may be found only in a few species. The reactive layer involves highly parallel, dedicated and fast mechanisms, capable of fine-tuning but no major structural changes. The deliberative layer involves the ability to create, compare, evaluate, select and act on new complex structures (e.g. plans, solutions to problems, linguistic constructs), a process that requires much stored knowledge and is inherently serial and resource limited, for several different reasons.
Perceptual and action subsystems had to evolve corresponding layered architectures in order to engage with all these to greatest effect. The third layer is linked to phenomena involving self consciousness and self control (and explains the existence of qualia, as the contents of attentive processes).
Different sorts of emotional states and processes correspond to different architectural layers, and some of them are likely to arise in sophisticated artificial agents of the future.
A short introduction is given to the SIM_AGENT toolkit developed in Birmingham for research and teaching activities involving the design of agents each of which has complex interacting internal mechanisms running concurrently, including symbolic and "sub-symbolic" mechanisms. Some of the material overlaps with the Synthetic Minds poster, below.
Title: Actual Possibilities
Author: Aaron Sloman
In Luigia Carlucci Aiello and Stuart C. Shapiro (eds), Principles of Knowledge Representation and Reasoning: Proceedings of the Fifth International Conference (KR '96), Morgan Kaufmann Publishers, 1996, pp 627-638.
Abstract:
This is a philosophical 'position paper', starting from the observation that we have an intuitive grasp of a family of related concepts of "possibility", "causation" and "constraint" which we often use in thinking about complex mechanisms, and perhaps also in perceptual processes, which according to Gibson are primarily concerned with detecting positive and negative affordances, such as support, obstruction, graspability, etc. We are able to talk about, think about, and perceive possibilities, such as possible shapes, possible pressures, possible motions, and also risks, opportunities and dangers. We can also think about constraints linking such possibilities. If such abilities are useful to us (and perhaps other animals) they may be equally useful to intelligent artefacts. All this bears on a collection of different, more technical topics, including modal logic, constraint analysis, qualitative reasoning, naive physics, the analysis of functionality, and the modelling of design processes. The paper suggests that our ability to use knowledge about "de-re" modality is more primitive than the ability to use "de-dicto" modalities, in which modal operators are applied to sentences. The paper explores these ideas, links them to notions of "causation" and "machine", and suggests that they are applicable to virtual or abstract machines as well as physical machines. The concept of "possibility-transducer" is introduced. Some conclusions are drawn regarding the nature of mind and consciousness.
Title: Reactive and Motivational Agents: Towards a Collective Minder
Author: Darryl Davis
Abstract:
This paper explores the design and implementation of a societal
arrangement of reflexive and motivational agents which will act as the
building blocks for a more abstract agent within which the current
agents act as distributed dynamic processing nodes. We contend that
reactive, deliberative and other behaviours are required in complete
(intelligent) agents. We provide some architectural considerations on
how these differing forms of behaviours can be cleanly integrated and
relate that to a discussion on the nature of motivational states and the
mechanisms used for making decisions.
Title: Towards a Design-Based Analysis of Emotional Episodes
Authors: Ian Wright, Aaron Sloman, Luc Beaudoin
Date: Oct 1995 (published 1996)
Appeared (with commentaries) in Philosophy, Psychiatry, and Psychology, vol 3, no 2, 1996, pp 101-126.
(This is a revised version of the paper presented to the Geneva Emotions Workshop, April 1995 entitled The Architectural Basis for Grief.)
Abstract:
The design-based approach is a methodology for investigating mechanisms capable of generating mental phenomena, whether introspectively or externally observed, and whether they occur in humans, other animals or robots. The study of designs satisfying requirements for autonomous agency can provide new deep theoretical insights at the information processing level of description of mental mechanisms. Designs for working systems (whether on paper or implemented on computers) can systematically explicate old explanatory concepts and generate new concepts that allow new and richer interpretations of human phenomena. To illustrate this, some aspects of human grief are analysed in terms of a particular information processing architecture being explored in our research group. We do not claim that this architecture is part of the causal structure of the human mind; rather, it represents an early stage in the iterative search for a deeper and more general architecture, capable of explaining more phenomena. However, even the current early design provides an interpretative ground for some familiar phenomena, including characteristic features of certain emotional episodes, particularly the phenomenon of perturbance (a partial or total loss of control of attention).
The paper attempts to expound and illustrate the design-based approach to cognitive science and philosophy, to demonstrate the potential effectiveness of the approach in generating interpretative possibilities, and to provide first steps towards an information processing account of 'perturbant', emotional episodes.
Many of the architectural ideas have been developed further in later papers and presentations, all available online, e.g.
- Online presentations (mainly pdf)
- The Architectural Basis of Affective States and Processes
Aaron Sloman, Ron Chrisley and Matthias Scheutz
In Who Needs Emotions?: The Brain Meets the Robot, Ed. M. Arbib and J-M. Fellous, Oxford University Press, Oxford, New York, 2005
Title: Beyond Turing Equivalence
Author: Aaron Sloman
Abstract:
What is the relation between intelligence and computation? Although the
difficulty of defining 'intelligence' is widely recognized, many are
unaware that it is hard to give a satisfactory definition of
'computational' if computation is supposed to provide a non-circular
explanation for intelligent abilities. The only well-defined notion of
'computation' is what can be generated by a Turing machine or a formally
equivalent mechanism. This is not adequate for the key role in
explaining the nature of mental processes, because it is too general, as
many computations involve nothing mental, nor even processes: they are
simply abstract structures. We need to combine the notion of
'computation' with that of 'machine'. This may still be too restrictive,
if some non-computational mechanisms prove to be useful for
intelligence. We need a theory-based taxonomy of architectures and
mechanisms and corresponding process types. Computational machines
may turn out to be a sub-class of the machines available for implementing
intelligent agents. The more general analysis starts with the notion of
a system with independently variable, causally interacting sub-states
that have different causal roles, including both 'belief-like' and
'desire-like' sub-states, and many others. There are many significantly
different such architectures. For certain architectures (including
simple computers), some sub-states have a semantic interpretation for
the system. The relevant concept of semantics is defined partly in terms
of a kind of Tarski-like structural correspondence (not to be confused
with isomorphism). This always leaves some semantic indeterminacy, which
can be reduced by causal loops involving the environment. But the causal
links are complex, can share causal pathways, and always leave mental
states to some extent semantically indeterminate.
See also the School of Computer Science Web page.
This file is maintained by Aaron Sloman, and designed to be lynx-friendly and viewable with any browser.
Email: A.Sloman@cs.bham.ac.uk