Usually, a component of a system is thought of as an agent when its interaction with that system can be interpreted as mediated by sensory-motor equipment (software or hardware). An artificial life entity and a robot are typical examples of software and hardware agents, respectively. The artificial life entity is studied precisely for its seemingly independent behavior in a programmed environment, and robots clearly interact with the real world through sensors and effectors. The fact that an agent is positioned in an environment with which it has direct contact through its sensors is called situatedness in agent theory; it emphasizes that the agent is placed in the context of its problem domain, not in a representation of that problem. A closely related term, embodiment, characterizes the fact that the agent receives direct feedback from its own actions in the environment. Embodiment highlights the structural coupling between the agent and its environment: the agent experiences changes in the environment, and its behavioral principles are in turn changed by the environment's evolution.
Agent terminology is often used in conjunction with autonomy [19], as an agent is deemed interesting for artificial intelligence only if it possesses some important feature of natural living systems. Under this criterion, the most highly regarded attribute of living things is their ability to ensure their own survival in the world through constant adaptation of behavior. The term autonomy describes the status of an agent that is self-sufficient in some way, and autonomous agent is sometimes implicit in the term agent. An extreme example is Sloman's definition of agents as ``behaving systems with something like motives''; to him, agents are things that have some form of free will [58] and are thus necessarily autonomous. We will see in the next paragraph that autonomy can cover a range of meanings.
Of course this description of autonomy is by no means exhaustive and could be further refined, but the steps of automaticity, operational autonomy and behavioral autonomy match the various types of agents that are currently used or studied. Automaticity is what is achieved in many engineering solutions for assembly-line robots. A very good example of operational autonomy is found in the Mars Pathfinder rover robotic agent [60]. The rover is equipped with various sensors that allow it to monitor power consumption, obstacle proximity, wheel position, etc. Moving it around on the surface of Mars is a two-step process. First, a human operator views the Mars site around the rover through a 3D reconstruction of what the robot has captured with its stereo imaging system, and designates a ``safe'' path from the current robot position to the goal position by defining a sequence of waypoints. The robot then proceeds to the goal without any further human interaction. In this process, the agent is told where it has to go and which intermediate positions lead to that objective, so the agent is not even automatic in the planning procedure; but once the operator has finished this description, the robot enters an operationally autonomous state in which it deals with all the real-time problems it encounters while transiting from one waypoint to the next, such as detecting bumper-rock contact and then avoiding the rock.
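This two-step division of labor can be sketched in a few lines of toy code (the 1-D track, the function names and the sidestep rule are illustrative assumptions, not the actual rover software): the operator contributes the waypoint sequence, and the agent then handles obstacles on its own while transiting between waypoints.

```python
def follow_waypoints(position, waypoints, blocked):
    """Operational autonomy sketch: move through operator-chosen
    waypoints on a 1-D track, stepping around blocked cells
    without any further operator input."""
    path = [position]
    for goal in waypoints:          # the plan comes from the operator
        while position != goal:
            nxt = position + (1 if goal > position else -1)
            if nxt in blocked and nxt != goal:
                # autonomous local reaction: sidestep the obstacle
                nxt += 1 if goal > position else -1
            position = nxt
            path.append(position)
    return path
```

For example, with waypoints [3, 5] and an obstacle at cell 2, the agent skips the blocked cell on its own while still visiting every waypoint the operator chose.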
In general, operational autonomy is what artificial intelligence can efficiently achieve, while behavioral autonomy is what research seeks to achieve (and sometimes does achieve, in a limited manner). The category of behaviorally autonomous agents is in fact open-ended and contains human agents at its highest levels. It is implicitly admitted here that the faculty of cognition is a property of systems possessing the highest levels of behavioral autonomy achievable in agents.
The central idea of multi-agent systems is that agents within a system may work together (cooperate) or against each other (compete), but as a whole bring forth a collective behavior, and it is this collective behavior that is the focus of research. Marvin Minsky advocates exactly this idea in [40], where agents are the members of a population that together produce a behaving system with motives. What is attempted here is the application of a reduction principle to individual behaviors, in the sense of the question ``what elemental behaviors can be brought together in interaction for some interesting meta-level behavior to be generated?''. The inspiration for this approach comes from the observation of natural systems that rely on collective behavior to successfully achieve autonomy. Typical, often-cited examples of such systems are anthills, beehives and termite colonies, but every multicellular lifeform qualifies as well.
Until now, I have viewed interaction as information exchange between an agent and an environment. Within multi-agent systems, interaction between the various agents composing the system is fundamental. This shift of interest is grounded in the basic assumption that designing elementary behaviors for agents and composing them hierarchically is easier than hard-coding intelligent behavior into one entity. Another possibility that is often explored is that the collective behavior will not be trivially derived from the singular behaviors and might in fact be more than what the individual behaviors could accomplish on their own, or as the well-known formula has it: the whole is greater than the sum of the parts. This brings us naturally to our next subsection.
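The assumption that elementary behaviors can be composed into a more capable controller can be illustrated with a minimal priority-based arbitration sketch in the spirit of this idea (the behaviors, the percept dictionary and the action names are invented for the example):

```python
def compose(behaviors):
    """Combine elementary behaviors into one controller: the first
    behavior (in priority order) that proposes an action for the
    current percept wins."""
    def controller(percept):
        for behavior in behaviors:
            action = behavior(percept)
            if action is not None:
                return action
        return "idle"            # default when no behavior applies
    return controller

# Illustrative elementary behaviors: avoiding obstacles takes
# priority over seeking food.
avoid = lambda p: "turn" if p.get("obstacle") else None
seek = lambda p: "forward" if p.get("food") else None

agent = compose([avoid, seek])
```

Each elementary behavior stays trivially simple; the interesting overall behavior (food-seeking that never collides) only appears in their composition.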
When the environment is composed of multiple agents, this definition can be extended to agent interactions as follows: interactions are the behavioral patterns that form between agents coexisting within an environment. Since agents may transform their surrounding environment by exercising their manipulative abilities, they can modify the information perceived by other agents in the same environment and thereby change those agents' future actions (an agent's behavior depends on its perceptions). Since this ability to modify the behavior of another agent works both ways, interaction appears as a ping-pong process of behavior transformation between two agents, an effect dubbed specularity by J.-P. Dupuy in the context of human interaction [16]. When more than two agents coexist in an environment, the interactions between all the agents quickly become difficult to apprehend: the number of interaction relations grows as the square of the number of agents involved.
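The quadratic growth of interaction relations is easy to make concrete: with $n$ agents there are $n(n-1)/2$ unordered pairs, each a potential two-way interaction. A small sketch (the function name is my own):

```python
from itertools import combinations

def interaction_pairs(agents):
    """Every unordered pair of coexisting agents is a potential
    interaction relation: n agents give n*(n-1)/2 pairs."""
    return list(combinations(agents, 2))

# The number of relations grows quadratically with the population:
for n in (2, 4, 8, 16):
    print(n, len(interaction_pairs(range(n))))  # 1, 6, 28, 120
```

Doubling the population roughly quadruples the number of relations, which is why even modest multi-agent systems resist pairwise analysis.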
A designer or observer of a MAS may call a behavior emergent when this behavior cannot easily be deduced from the individual properties of the agents in the system, or when these properties are not readily accessible. Emergence arises from the interactions of the agents in the system in the same way as properties such as viscosity or fluidity ``emerge'' from the physical properties of the molecules composing a liquid. Once again, it is vital to emphasize the role of the observer in this definition: the notion of emergence is an epistemological one. A property of a collective emerges from the individual properties when the most adequate tools for studying the individuals are not the same as those needed to study the collective. Recalling my previous example: if quantum mechanics were a perfect model of reality and we had access to perfect tools, viscosity would be most adequately studied through quantum effects and elementary particles.
This approach reflects the methodology of the natural sciences as described by Herbert Simon in [55]: ``The central task of a natural science is to make the wonderful commonplace: to show that complexity, correctly viewed, is only a mask for simplicity; to find pattern in apparent chaos.'' Here, intelligence is equated with the ability to reason upon a (reduced) model of reality. The hypothesis is then that a physical symbol system, a device that can hold and manipulate symbols in a representation, is sufficiently general to produce intelligent action. This framework is called top-down because its starting assumptions concern high-level cognitive capacities such as modeling, planning and reasoning. It is from these assumptions that classical artificial intelligence expects to generate intelligence capable of dealing even with the down-to-earth problems that a living being encounters every day.
I will use the term adaptation for any internal mechanism that allows an agent to modify its own behavior-generation processes depending on the experience it acquires from the environment. Almost all artificial intelligence research is now devoted to understanding adaptation. This is due to a transfer of effort from the concept of intelligence as an explicitly programmed feature of an agent to the concept of intelligence as an emergent property of the agent's interaction with the environment. Since we are not able to write down a definition of intelligence, we cannot make a model of intelligence and implement it in an algorithm; rather, we expect properties akin to intelligence to appear in a system where interaction between components occurs.
The fundamental difference between adaptive and non-adaptive agents is that, for an adaptive agent, different environmental conditions will produce different agents after some time, whereas a non-adaptive agent will still behave in the same way however long it is left in the environment. It should be noted that most often, typically for agents that are used to solve real problems, the agent is adapted in some preliminary phase but is then used as a non-adaptive agent in the state that was reached through the adaptation phase. The reason is that the agent's behavior can then be analyzed in its adapted state before it is used on the real problem environment, where failure could prove dangerous or costly to the user.
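The adapt-then-freeze pattern described above can be sketched as follows (a minimal toy, assuming a single scalar policy parameter and an invented update rule, not any particular learning algorithm): while adapting, experience changes the agent's behavior-generation process; after freezing, the same experience leaves it unchanged.

```python
class Agent:
    """Toy adaptive agent: a scalar policy parameter is nudged
    toward observed targets during the adaptation phase, then
    frozen for analysis and deployment."""

    def __init__(self, rate=0.5):
        self.weight = 0.0       # the behavior-generation process
        self.rate = rate
        self.adapting = True

    def act(self, percept):
        return self.weight * percept   # behavior depends on the adapted state

    def experience(self, percept, target):
        if self.adapting:       # only the adaptation phase changes behavior
            self.weight += self.rate * (target - self.act(percept))

    def freeze(self):
        self.adapting = False   # deployed as a non-adaptive agent
```

After a preliminary phase of calls to experience(), freeze() fixes the agent in its adapted state, so its behavior can be inspected once and relied upon in the field.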
I identify only two basic techniques used today to induce adaptation in agents: learning and evolution, which I will now describe.