
Artificial Intelligence and Agent Systems

Much recent work in the field of artificial intelligence has been devoted to multi-agent systems. Since the AXE project, of which this Ph.D. is part, has focused largely on the concepts of autonomy and coordination, I will present in this chapter the position our group has taken on these issues.

Autonomous Agents

Agents and autonomous agents

The term agent is commonly used to describe a software or hardware component of an open system that can be considered and studied in relation to this system. This implies some separation from the system in question, so that the agent can be considered as a distinct entity within it, dividing the system into an agent part and an environment part (the agent part being a symbol system). The term agent is usually used to emphasize the fact that a component of a system has specific properties that are of interest in their own right. It is important to be aware that this definition is not formal, in the sense that any object in an object-oriented programming language or any function in an imperative language ``could'' be called an agent; rather, one relies on the fact that, intuitively, the word agent is used only for components that have the look and feel of behavioral entities. The term has, in fact, already been much abused in precisely this sense.

Usually, a component of a system is thought of as an agent when its interaction with this system can be interpreted as operating through sensory-motor equipment (software or hardware). The artificial life entity and the robot are the typical examples of software and hardware agents respectively: the artificial life entity is studied precisely for its seemingly independent behavior in a programmed environment, and robots clearly interact with the real world through sensors and effectors. The fact that an agent is positioned in an environment with which it has direct contact through its sensors is called situatedness in agent theory; it emphasizes that the agent is placed in the context of its problem domain and not in a representation of this problem. A closely related term, which characterizes the fact that the agent receives direct feedback from its own actions in the environment, is embodiment. Embodiment highlights the structural coupling between the agent and its environment, implying that the agent experiences changes in the environment and will have its behavioral principles changed by the environment's evolution.

Agent terminology is often used in conjunction with autonomy [19], as an agent is deemed interesting for artificial intelligence only if it possesses some important feature of natural living systems. Under this criterion, the most highly regarded attribute of living things is their ability to ensure their own survival in the world through constant adaptation of behavior. The term autonomy is used to describe the status of an agent that is self-sufficient in some way, and sometimes autonomous agent is implicit in the term agent. An extreme example of this is when Sloman defines agents as ``behaving systems with something like motives''; that is, to him, agents are things that have some form of free will [58] and are thus necessarily autonomous. As we will see in the next paragraph, autonomy can have a range of meanings.

Behavior and Autonomy

Behavior describes the pattern of actions that an agent expresses when confronted with its environment. For an autonomous agent, behavior is the means by which it achieves its objectives and what makes it autonomous. An agent may express a wide range of behaviors, and these give an indication of its degree of autonomy. At the lowest level, I have described an agent as being capable of interacting with its environment. For an agent to be considered autonomous, it is commonly admitted that it must at least be automatic, that is, able to operate in its environment, sense it and act on it in ways that are beneficial to the task it must accomplish [59]. From there, an agent is considered operationally autonomous when it is independent of human intervention: the agent can accomplish its goals on its own (in ``normal'' situations) [46]. The highest level of autonomy is reached when the agent is behaviorally autonomous, that is, when it is additionally able to generate new behaviors originating in its own past experience of interactions with the world [59]. Thus, a behaviorally autonomous agent would be able to store past experience and interpret it in order to adapt its behavior to recurring or new situations, thereby aiming at an optimization of its behavior in view of its goal.

Of course this description of autonomy is by no means exhaustive and could be further refined, but the steps of automaticity, operational autonomy and behavioral autonomy match the various types of agents that are currently used or studied. Automaticity is what is achieved in many engineering solutions for assembly line robots. A very good example of operational autonomy is found in the Mars Pathfinder rover robotic agent [60]. The rover is a robotic agent equipped with various sensors that allow it to monitor power consumption, obstacle proximity, wheel position, etc. Moving it around on the surface of Mars is a two-step process: a human operator views the Mars site around the rover through a 3D reconstruction of what the robot has captured with its stereo imaging system, and designates a ``safe'' path from the current robot position to the goal position by defining a sequence of waypoints; the robot then proceeds to the goal without any further human interaction. In this process, the agent is told where it has to go and which intermediate positions lead to that objective, so the agent is not even automatic in the planning procedure. Once the operator has finished this description, however, the robot enters an operationally autonomous state in which it deals with all the real-time problems it encounters while transiting from one point to another on the surface of Mars, such as contact between its bumpers and a rock, followed by avoidance.

In general, operational autonomy is what artificial intelligence can efficiently achieve, while behavioral autonomy is what it seeks to achieve (and sometimes does, in a limited manner). The category of behaviorally autonomous agents is in fact open and contains human agents at its higher levels. It is implicitly admitted here that the faculty of cognition is a property of systems possessing the highest levels of behavioral autonomy that can be achieved in agents.

Multi-agent Systems

A multi-agent system (MAS) is a system in which multiple agents coexist in a common environment. In comparison with the previously defined agent system, where a single agent is coupled to an environment through sensors and effectors, a MAS considers a family of agents, each of which is coupled with an environment that includes its fellow agents. On the other hand, since the family of agents can itself be considered as a single agent, with all of the individually available perceptual equipment (or effectors) brought together as a single sensor (resp. effector), multi-agent systems can give rise to a recursive definition of an agent system [37] or can be viewed as a refinement of agent systems (see figure 4.1).
  
Figure 4.1: Formal description of a multi-agent system. [Boxed formal definition relating the agents $A_i$, the composite agent $A$ and the environment $E$, and the interactions between $A$ and $E$ and between the $A_i$ themselves.]
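One way to sketch the recursive structure just described, using the notation of figure 4.1 ($A_i$ for the individual agents, $A$ for the composite agent, $E$ for the environment), is the following illustrative summary rather than the formal definition itself; the coupling relation, written $\leftrightarrow$ here, stands for the sensor/effector coupling and is left abstract:

\[
\begin{array}{ll}
\text{agent system:} & S = (A, E), \quad A \leftrightarrow E \\
\text{multi-agent system:} & S = (\{A_1, \dots, A_n\}, E), \quad A_i \leftrightarrow E \cup \{A_j \mid j \neq i\} \\
\text{composite view:} & A = \{A_1, \dots, A_n\} \text{ taken as a single agent with pooled sensors} \\
 & \text{and effectors, so that } (A, E) \text{ is again an agent system.}
\end{array}
\]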

The central idea of multi-agent systems is that agents within a system may work together (cooperate) or against each other (compete), but as a whole bring forth a collective behavior, and it is this collective behavior that is the focus of research. This idea is advocated by Marvin Minsky in [40], where agents are the members of a population that together produce a behaving system with motives. What is attempted here is the application of a reduction principle to individual behaviors, in the sense of the question ``what elemental behaviors can be brought together in interaction for some interesting meta-level behavior to be generated?''; the inspiration for this approach comes from the observation of natural systems that rely on some collective behavior to successfully achieve autonomy. Typical, often cited examples of such systems are anthills, beehives or termite colonies, but every multicellular lifeform would qualify as well.

Up to now, I have treated interaction as information exchange between an agent and an environment. Within multi-agent systems, interaction between the various agents composing the system is fundamental. This shift of interest is grounded in the basic assumption that designing elementary behaviors for agents and hierarchically composing them is easier than hard-coding intelligent behavior into one entity. Another interesting possibility that is often explored is that collective behavior will not be trivially derived from the individual behaviors and might in fact be more than what the individual behaviors could accomplish, or, as the established formula has it: the whole is greater than the sum of the parts. This brings us naturally to the next subsection.

Interaction and Emergence

Interaction is the history of transformations that have been effected by an agent on its environment and by the environment on the agent through its sensors. Interaction is produced when a situated, embodied and behavioral agent ``lives'' in an environment. An agent is thus said to be in interaction with the environment when it is actively transforming and being transformed by its surroundings.

When the environment is composed of multiple agents, this definition can be extended to agent interactions as follows: interactions are the behavioral patterns that form between agents coexisting within an environment. Since agents may transform their surrounding environment by exercising their manipulative abilities, they are able to modify the information perceived by other agents in the same environment and thereby change those agents' future actions (an agent's behavior depends on its perceptions). Since this ability to modify the behavior of another agent works both ways, interaction appears as a ping-pong process of behavior transformation between two agents, an effect dubbed specularity by J.-P. Dupuy in the context of human interaction [16]. When more than two agents coexist in an environment, interaction between all the agents quickly becomes difficult to apprehend, as the number of interaction relations between agents grows on the order of the square of the number of agents involved.
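To make this growth concrete, counting only pairwise, symmetric interaction relations (directed or higher-order interactions grow even faster), $n$ agents give

\[
\binom{n}{2} = \frac{n(n-1)}{2} = O(n^2)
\]

relations: 3 agents yield 3 relations, 10 agents already yield 45, and 100 agents yield 4950.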

A designer or observer of a MAS may call a behavior emergent when this behavior cannot easily be deduced from the individual properties of the agents in the system, or when these properties are not readily accessible. Emergence arises from the interactions of the agents in the system, in the same way as macroscopic properties such as viscosity or fluidity ``emerge'' from the physical properties of the molecules composing the liquid. Once again, it is vital to emphasize the role of the observer in this definition; the notion of emergence is an epistemological one: a property of a collective emerges from the individual properties when the most adequate tools used to study the individuals are not the same as those needed to study the collective. Clearly, recalling my previous example, if quantum mechanics were a perfect model of reality and we had access to perfect tools, viscosity would be most adequately studied through quantum effects and elementary particles.

Agent Methodology

Artificial intelligence has tried to solve the problem of intelligence by attempting to synthesize it in artifacts (more specifically, computers). Over the years, its methods have evolved from reasoning over models of problem domains to the emergence of behaviors in situated artifacts. I will here motivate this evolution by going through the two major steps that led the field to its current state and describe what can be done to build behavioral agents.

Top-down Approach

The classical artificial intelligence perspective on intelligent behavior is that of reasoning. That is, an intelligent agent trying to solve a problem is expected to summarize its problem, make an abstraction of it (in the form of symbols and expressions combining these symbols) and then apply deduction and inference to extract a solution from the symbolic representation of the problem, again in the form of an expression. It is then supposed to convert the answer from the abstract representation it has of the problem domain back into the domain itself.

This approach reflects the methodology of the natural sciences as described by Herbert Simon in [55]: ``The central task of a natural science is to make the wonderful commonplace: to show that complexity, correctly viewed, is only a mask for simplicity; to find pattern in apparent chaos.'' Here, intelligence is equated with the ability to reason upon a (reduced) model of reality. The hypothesis is then that a physical symbol system, a device that can hold and manipulate symbols in a representation, is sufficiently general to be able to produce intelligent action. This framework is called top-down because its starting assumptions are about high-level cognitive capacities such as modeling, planning and reasoning. It is from these assumptions that classical artificial intelligence expects to generate intelligence that will deal even with the down-to-earth problems that a living being encounters every day.

Bottom-up Approach

New AI, on the other hand, considers intelligence as emergent from a system's ability to deal with the many simple problems that appear in its interaction with an environment. For this bottom-up approach, high-level cognitive abilities such as forming internal representations of the environment and reasoning on these representations are features that an agent might develop if faced with an environment of sufficient complexity, but they are by no means preexisting in an agent. The aim of new AI is to build agents that can deal with their environment by reacting to changes in this environment in a way that will ensure their continued integrity. Usually this is accomplished by providing the agent with a set of basic abilities and bringing it to arrange the use of these abilities in search of an efficient, self-sustaining behavior. An agent may then be called intelligent if it is able to deal with problems that seem to require intelligence, but there is no preliminary assumption about what the internal processes of intelligence in an agent are or should be.
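As a purely illustrative sketch of this idea (not a system studied in this thesis), the fragment below gives an agent a handful of hand-written basic abilities arranged by priority and lets it react to each percept in turn; the percept fields, behaviors and priorities are invented for the example:

\begin{verbatim}
# Minimal sketch of a bottom-up (reactive) agent: a fixed set of basic
# abilities is arranged by priority, and the agent simply reacts to its
# current percept. No internal world model or planning is involved.
# The percept fields and behaviors are invented for illustration only.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Percept:
    obstacle_ahead: bool
    battery_low: bool

# Each basic ability looks at the percept and either proposes an action
# (a string here) or declines by returning None.
Behavior = Callable[[Percept], Optional[str]]

def avoid_obstacle(p: Percept) -> Optional[str]:
    return "turn_left" if p.obstacle_ahead else None

def recharge(p: Percept) -> Optional[str]:
    return "seek_charger" if p.battery_low else None

def wander(p: Percept) -> Optional[str]:
    return "move_forward"          # default activity, always applicable

class ReactiveAgent:
    def __init__(self, behaviors: list):
        # Earlier entries have higher priority (e.g. survival before wandering).
        self.behaviors = behaviors

    def act(self, percept: Percept) -> str:
        for behavior in self.behaviors:
            action = behavior(percept)
            if action is not None:
                return action
        return "idle"

if __name__ == "__main__":
    agent = ReactiveAgent([recharge, avoid_obstacle, wander])
    print(agent.act(Percept(obstacle_ahead=True, battery_low=False)))  # turn_left
    print(agent.act(Percept(obstacle_ahead=True, battery_low=True)))   # seek_charger
\end{verbatim}

The point of the sketch is that no world model or plan is built anywhere: whatever competence the agent displays comes entirely from the arrangement of its basic abilities.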

Adaptation

The earliest methods used in artificial intelligence aimed to program behavior into an agent by attempting to specify all the possible situations the agent could encounter and providing it with means to cope with them. Unfortunately, such techniques have proven not to scale, since the programmer is responsible for thinking of every possible situation the agent might encounter and the number of such situations grows explosively with the complexity of the environment (this has been called the frame problem of classical artificial intelligence). To counter this problem, the available solutions are either to use generalization in problem recognition and thus give the agent some default behavior to use when an unforeseen situation occurs, which is unsatisfactory in many cases, or to allow the agent to build new solutions for unforeseen situations through some adaptive algorithm.

I will use the term adaptation for any internal mechanism that allows an agent to modify its own behavior-generation processes depending on the experience it acquires from the environment. Almost all artificial intelligence research is now devoted to understanding adaptation. This is due to the transfer of effort from the concept of intelligence as an explicitly programmed feature of an agent to the concept of intelligence as an emergent property of agent interaction with the environment. Since we are not able to write down a definition of intelligence, we cannot make a model of intelligence and implement it in an algorithm; rather, we expect to produce the appearance of properties akin to intelligence in a system where interaction between components occurs.

The fundamental difference between adaptive agents and non-adaptive agents is that for an adaptive agent, different environmental conditions will produce different agents after some time, whereas a non-adaptive agent will still behave in the same way after any period of time it is left in the environment. It should be noted that most often, typically for agents that are used to solve real problems, the agent is adapted in some preliminary phase but is then used as a non-adaptive agent in the state that was reached through the adaptation phase. The reason for this is that an analysis of the agent's behavior can then be made on the adapted agent before it is used on the real problem environment, where failure could prove dangerous or costly to the user.

I identify two basic techniques used today to induce adaptation in agents: learning and evolution, which I will now describe.

Learning


  
Figure 4.2: A learning agent. [Illustration: Illustrations/agents_learning.eps]

Learning is an internal (ontogenetic) adaptation mechanism, whereby an agent has its internal behavior-generating algorithm modified by the effect of external environmental conditions perceived through its sensors. There are many specific learning techniques that can be used to this end, but essentially the process is the same for all of them. We can assume that a learning agent has a two-layer internal algorithm: the first layer is the behavior-generating component and the second acts as a critic of the first. Whenever the first layer generates an action based on its perceptions or internal state, the critic attempts to evaluate the quality of the action with respect to the agent's predefined objective. Based on this evaluation, the critic may change the behavior-generating layer. To pursue its goal, the critic in a learning agent may in some cases have access to more information than the agent algorithm. For example, supervised learning techniques use feedback from an external observer of the system to provide the critic with an evaluation of the agent's performance.
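As a minimal sketch of this two-layer scheme (one concrete choice among many possible learning techniques; the toy environment, states, actions, reward signal and the simple reward-averaging update are invented for the example):

\begin{verbatim}
# A minimal sketch of the two-layer learning scheme: a behavior-generating
# layer picks actions from learned action values, and a critic evaluates
# each outcome (via a reward signal assumed to be supplied by the
# environment or an external observer) and adjusts those values.
# The environment, states and rewards are invented for illustration.

import random
from collections import defaultdict

class LearningAgent:
    def __init__(self, actions, learning_rate=0.1, exploration=0.2):
        self.actions = actions
        self.learning_rate = learning_rate
        self.exploration = exploration
        # Behavior-generating layer: estimated value of each (state, action).
        self.values = defaultdict(float)

    def act(self, state):
        # Mostly exploit current estimates, occasionally explore.
        if random.random() < self.exploration:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.values[(state, a)])

    def critic(self, state, action, reward):
        # Critic layer: nudge the value of the chosen action towards the
        # observed reward, thereby modifying future behavior generation.
        key = (state, action)
        self.values[key] += self.learning_rate * (reward - self.values[key])

if __name__ == "__main__":
    # Toy environment: in state "hungry", the action "eat" is rewarded.
    agent = LearningAgent(actions=["eat", "sleep"])
    for _ in range(200):
        action = agent.act("hungry")
        reward = 1.0 if action == "eat" else 0.0
        agent.critic("hungry", action, reward)
    print(agent.act("hungry"))  # almost always "eat" after adaptation
\end{verbatim}

Here the critic only receives a scalar reward; a supervised variant would instead receive its evaluation from an external observer, as mentioned above.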


  
Figure 4.3: Evolving a population of agents. [Illustration: Illustrations/agents_evolution.eps]

Evolution

Evolution is an external (phylogenetic) adaptation mechanism operating on a population of agents, whereby successive generations of agents are modified according to the relative success of previous agents operating in the environment. Evolutionary techniques are inspired by the Darwinian theory of natural selection, where environmental pressure ensures the survival of the fittest. In an evolutionary algorithm, a population of agents is considered and an external critic belonging to the environment evaluates the relative quality of individuals in the population by assigning each of them a fitness value. New individuals are then occasionally created by combining the properties of multiple individuals in the population. The selection of which individuals will contribute to the creation of the new individuals is biased towards high-fitness individuals. In the illustration of this process (figure 4.3), I show the critic as an agent that is external to the evolving population. In fact, this is a representational abstraction: the critic may very well be a distributed entity that is partially incorporated in the evolving population through competition mechanisms, as is the case for evolution in the real world.
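As a minimal sketch of such an evolutionary loop (the bit-string genome, the fitness function and all parameters are invented for the example; they stand in for whatever agent properties and external critic a real system would use):

\begin{verbatim}
# Minimal sketch of an evolutionary loop: an external critic assigns a
# fitness to each individual, and new individuals are created by
# recombining (and slightly mutating) parents selected with a bias
# towards high fitness. Genome, fitness and parameters are illustrative.

import random

GENOME_LENGTH = 10

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # External critic: here simply the number of 1s in the genome.
    return sum(genome)

def select(population):
    # Fitness-proportionate (roulette-wheel) selection.
    weights = [fitness(g) + 1e-9 for g in population]
    return random.choices(population, weights=weights, k=1)[0]

def crossover(parent_a, parent_b):
    point = random.randint(1, GENOME_LENGTH - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome, rate=0.01):
    return [1 - bit if random.random() < rate else bit for bit in genome]

def evolve(generations=50, population_size=30):
    population = [random_genome() for _ in range(population_size)]
    for _ in range(generations):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(population_size)]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))
\end{verbatim}

Fitness-proportionate selection is only one of several common selection schemes; what matters for the argument above is simply that reproduction is biased towards individuals judged successful by the critic.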

