
Conclusion

The work presented in this thesis originates in a global effort to understand and implement multi-agent systems in the context of open computer networks, as a solution for distributed process optimization and management. The element of this objective addressed here is the modeling of a virtual environment that can be mapped over a network and allows agents to be embodied in the network. It is in the course of this realization that philosophical questions about the nature of agents and their relation to an environment were approached, and the ensuing study of the symbol grounding problem has served as a general context for the overall work. The results that can be drawn from the thesis are thus twofold: on the practical side, the realization of an environment model for distributed agent systems; on the conceptual side, an attempt at formulating an operational theory of meaning.

Practical Aspects

Using a computer game metaphor, a general environment model was designed and implemented. In this model, agents are situated in places of a virtual physical space that forms a virtual topology. The nodes of the topology may contain both active and passive elements, corresponding to processes and data respectively. Virtual physical laws govern the evolution of the environment as agents move about or act on features of the environment structure.
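To make this structure concrete, the following Python sketch shows how loci, elements and a simple virtual physical law might fit together; all names are illustrative assumptions, not the actual EMud code.

class Element:
    """A passive element: a piece of data situated at a locus."""
    def __init__(self, name):
        self.name = name

class Agent(Element):
    """An active element: an element imbued with a control algorithm."""
    def act(self, locus):
        # A trivial "virtual physical law": move through the first exit, if any.
        if locus.exits:
            target = next(iter(locus.exits.values()))
            locus.elements.remove(self)
            target.elements.append(self)
            return target
        return locus

class Locus:
    """A node of the virtual topology, e.g. one computer on a network."""
    def __init__(self, name):
        self.name = name
        self.exits = {}      # direction -> neighbouring Locus
        self.elements = []   # active and passive elements situated here

def tick(loci):
    """One step of the environment dynamics: every agent acts once.
    (A real scheduler would prevent an agent acting twice per tick.)"""
    for locus in loci:
        for element in list(locus.elements):
            if isinstance(element, Agent):
                element.act(locus)

a, b = Locus("a"), Locus("b")
a.exits["east"] = b
a.elements.append(Agent("walker"))
tick([a, b])
print([e.name for e in b.elements])  # ['walker']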

Design

The environment model is designed as a collection of interconnected worlds in which agents act in pursuit of their individual tasks. With the locus/exit notion, network topologies can be implemented with loci as computer nodes and exits as the network connections linking computers. In this artificial physical space, elements can be situated to represent data objects or processes. The agents in this type of environment are elements imbued with control algorithms that allow them to operate in the environment. Perception channels associated with agents let them detect local environment features and sense changes in the environment.
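As an illustration of the perception channel idea, the sketch below lets agents subscribe to a channel and receive percepts about local environment features; the interface is an assumption made for the example, not the model's actual one.

class Channel:
    """A perception channel delivering percepts to subscribed agents."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def emit(self, percept):
        # Every subscribed agent senses the change through this channel.
        for agent in self.subscribers:
            agent.perceive(self.name, percept)

class Agent:
    def __init__(self, name):
        self.name = name
        self.percepts = []

    def perceive(self, channel_name, percept):
        self.percepts.append((channel_name, percept))

vision = Channel("vision")
scout = Agent("scout")
vision.subscribers.append(scout)
vision.emit("a door opens to the east")
print(scout.percepts)  # [('vision', 'a door opens to the east')]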

Taking inspiration from games, the model also includes the notion of human actors in the environment, by allowing humans to take control of agents via remote connections. With this feature, human-computer interaction can also be studied in view of future work on human-agent interfacing.
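One way such control transfer might be realized, sketched below under assumed names, is to separate an agent's body from its controller, so that a remote human session can be attached in place of the built-in control algorithm.

class Body:
    """An agent body; whichever controller is attached drives it."""
    def __init__(self):
        self.controller = None

    def next_action(self, percepts):
        return self.controller.decide(percepts)

class ScriptedController:
    """Stands in for an agent's built-in control algorithm."""
    def decide(self, percepts):
        return "wait"

class HumanController:
    """Would read a command from a remote (e.g. telnet) session."""
    def __init__(self, read_command):
        self.read_command = read_command

    def decide(self, percepts):
        return self.read_command()

body = Body()
body.controller = ScriptedController()
print(body.next_action([]))  # 'wait'
body.controller = HumanController(lambda: "go east")  # human takes over
print(body.next_action([]))  # 'go east'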

A further step to be considered in future work, taking advantage of the available features of the model, concerns the observation methods available for studying the dynamics of such an environment. In this direction, observation tools and a formalization of the observer's situation should be added. Typically, degrees of visibility of elements in the environment were considered an important future extension in the preliminary phase of the project, since this concept would allow the observation of systems with minimal perturbation. In the prototype application, this had been implemented as two different skills. The first was the possibility for a human to connect to a preexisting agent through a ``snoop'' command: snooping allowed a person to perceive the world from the perspective of the agent (entering the body of the agent and experiencing the same perceptions), but without being able to act. The second was the possibility for elements to be invisible to others. In the current model, this ability has been partially replaced by the notion of perception channels, but has not yet been implemented.
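A minimal sketch of the snoop idea is given below (a hypothetical reconstruction, not the prototype's code): the snooper receives a copy of everything the agent perceives, but no action path exists, so observation perturbs the system minimally.

class Agent:
    def __init__(self, name):
        self.name = name
        self.snoopers = []  # read-only observers of this agent's percepts

    def perceive(self, percept):
        print(f"{self.name} perceives: {percept}")
        for snooper in self.snoopers:
            snooper(percept)  # snoopers only receive percepts; they cannot act

agent = Agent("explorer")
agent.snoopers.append(lambda p: print(f"[snoop] {p}"))
agent.perceive("a corridor stretches north")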

Another important missing feature is that of grouping or splitting elements into subcomponents. Initially, this was not considered fundamental, but with the work of my colleague P. Lerena on Bio-machines in the context of the AXE project [35], it has become an important notion. The study he is leading with Bio-machines focuses on the evolution of populations of agents that can reproduce both sexually and asexually and combine into multicellular entities. Because of this last possibility, the combination notion becomes essential in EMuds if he is to use these environments for future experimentation.
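A natural way to support such grouping, sketched here under assumed names, is a composite element that behaves as a single element but can be split back into its subcomponents, as multicellular Bio-machines would require.

class Element:
    def __init__(self, name):
        self.name = name

class Composite(Element):
    """A group of elements acting as one, e.g. a multicellular entity."""
    def __init__(self, name, parts):
        super().__init__(name)
        self.parts = list(parts)

    def split(self):
        """Dissolve the group, recovering the constituent elements."""
        return self.parts

cells = [Element("cell-1"), Element("cell-2")]
organism = Composite("organism", cells)
print([p.name for p in organism.split()])  # ['cell-1', 'cell-2']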

Implementation

The implementation of this model takes the form of the EMud application, a simulation of the model on a single computer. In EMuds, the whole world in which agents evolve is built by writing definition files for the environment state and its population of elements. The dynamics of the environment are then simulated as if the environment were distributed over a network of computers. Human actors can also connect to the application with the telnet utility, as on a Mud, and interact with the agents evolving in the environment. Through the definition of skills, the virtual physical laws of the environment can be set up, and the creation or destruction of objects in the environment regulated. Some sample environments in which multiple agents coexist have been written, and the current demonstration of the EMud runs on one of the computers at the Fribourg University, allowing anyone to connect and explore the current demonstration world.
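The actual definition-file syntax is not reproduced here, but the following hypothetical example illustrates the idea of building a world from a declarative description of loci, exits and elements.

# Hypothetical world definition and loader; the real EMud
# definition-file syntax differs.
WORLD = """
locus cave
locus forest
exit  cave north forest
agent wanderer cave
"""

def load(text):
    loci = {}
    for line in text.strip().splitlines():
        kind, *args = line.split()
        if kind == "locus":
            loci[args[0]] = {"exits": {}, "elements": []}
        elif kind == "exit":
            src, direction, dst = args
            loci[src]["exits"][direction] = dst
        elif kind == "agent":
            name, where = args
            loci[where]["elements"].append(name)
    return loci

loci = load(WORLD)
print(loci["cave"]["exits"])     # {'north': 'forest'}
print(loci["cave"]["elements"])  # ['wanderer']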

In future work, the features of the model that must be introduced in the application are the generalized perception channel concept (only one perception channel currently exists for all agents), the interlinking of zones, many more skills, and dynamic environment generation. The application would also benefit from an inline scripting language for control algorithms, a character generation scheme (a gaming and agent specification feature), and a graphical interface for environment creation. If the model then proves to be relevant, a distributed implementation should be written.

Theoretical Aspects

The writing of this Ph.D. thesis has brought me to realize that, unlike many other scientific disciplines, artificial intelligence faces difficulties even in defining the problems it has to solve. Much work in the field is devoted to finding better algorithms for specific problems that are either invented for a particular algorithm or type of algorithm, or found in a real-world environment. The reasons for this situation are multiple, but two stand out: on the one hand, philosophical problems about the nature of intelligence and of the mind; on the other, the scientific limits of our understanding of complexity, information and algorithm dynamics.

Philosophical Problems

I begin this thesis with a short discussion of the philosophical perspective on artificial intelligence and introduce the symbol grounding problem, which is considered, at least by philosophers, to be the main obstacle to machine intelligence. In discussing this problem, I show that if intelligence is an emergent property of the functioning brain, the symbol grounding problem can be reduced to one of discovering equivalent functional properties in computer algorithms, which is an implicit assumption most AI researchers make.

Reducing the problem in this sense brings me to consider the concepts of meaning and understanding in terms of, respectively, the causal role of components (symbols) in an algorithm and the compression of information within the description language that constitutes this algorithm. Although this view can be further debated from a philosophical point of view, it has the advantage, when used as a working hypothesis, of giving a scientific base on which a theory can be built.

The Information Problem

Unfortunately, even reducing the philosophical question in such a way does not produce an immediate solution to the problems of artificial intelligence, because our knowledge about the nature of information is very limited. What is currently available is a good definition of an algorithm, from which a measure of algorithmic information can be constructed. Even if this notion of algorithmic information allows the demonstration of very interesting mathematical results (typically about randomness), it has practical limitations related to the fact that it is built as a theoretical universal measure, attempting to capture a unique information content value in a description. I do not believe this is possible, because the information content of a description must relate to an observer of that description. With algorithmic information, this dependence on an observer is set aside by accepting that information is defined only up to a finite constant depending on the language used to calculate the information content of a description, which is theoretically satisfactory. My objection is that artificial intelligence is not only a theoretical problem but also, essentially, a practical one: that of finding implementations of algorithms that exhibit high-level autonomous behaviors. Here, the agent that must extract information from the environment is an observer, and if, in its terms, the information content of what is perceived differs by a ``finite constant'' from what another agent can extract from the environment, this may make a great difference.
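The ``finite constant'' in question is the one given by the invariance theorem of algorithmic information theory:

% Invariance theorem: the choice of universal machine shifts the
% measure only by a machine-dependent constant.
\[
  \bigl| K_U(x) - K_V(x) \bigr| \;\le\; c_{UV}
  \qquad \text{for all descriptions } x,
\]

where K_U and K_V are the algorithmic information contents measured relative to two universal machines U and V, and the constant c_{UV} depends only on U and V. Nothing in the theorem bounds how large c_{UV} is, which is precisely the practical difficulty described above.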

Dealing with these Questions

I suggest two courses of action for future research in artificial intelligence, based on the preceding observations. The first, which I feel would greatly improve research in the field, is a unification of the many results that have already been published in the discipline. To do this, a categorization of the properties of all the algorithms that have been studied should be attempted, by first designing a set of problems considered important on the path to machine intelligence. Based on this set of problems, benchmarking experiments can be devised to evaluate the functional properties of each type of algorithm, allowing comparisons to be made between methods. Most potential test experiments have in fact already been invented, since every school of AI has used some method of evaluating the results of its algorithms. The experimental part of the thesis gives an idea of how this classification may be initiated and provides a basic environment model in which benchmarking can be done for a wide variety of problems.
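As a sketch of what such benchmarking could look like (the problems and algorithms below are toy stand-ins, not actual AI methods), each algorithm is run against a fixed set of test problems, yielding a comparable score profile per method.

def benchmark(algorithms, problems, trials=10):
    """Return {algorithm name: list of mean scores, one per problem}."""
    results = {}
    for name, algo in algorithms.items():
        scores = []
        for problem in problems:
            runs = [problem(algo) for _ in range(trials)]
            scores.append(sum(runs) / trials)
        results[name] = scores
    return results

# Toy stand-ins: a "problem" scores an algorithm's answer on an input.
problems = [lambda algo: algo(3), lambda algo: algo(10)]
algorithms = {"double": lambda n: 2 * n, "square": lambda n: n * n}
print(benchmark(algorithms, problems, trials=1))
# {'double': [6.0, 20.0], 'square': [9.0, 100.0]}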

A second, theoretical approach to the information problem is to study the ``practical'' properties of Turing machines, since even the so-called universal Turing machines differ greatly in practice. The difficulties underlying this approach are twofold. First, the set of Turing machines is infinite, and it might not be possible to categorize these machines into general classes, some relevant to the problems found in AI and others not. Second, the mathematical tools available for this investigation are not well suited to distinguishing practical cases from theoretical ones. One attempt I put forward is to apply non-standard analysis to the theory of Turing machines. In this mathematical theory, the term standard allows a distinction to be made between standard objects (those that can be observed in practice) and non-standard objects (the purely theoretical ones), and hence between standard and non-standard Turing machines. It remains to be seen whether fundamental properties of these two classes of Turing machines can be exhibited with this theory.
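In Nelson's internal set theory, one axiomatization of non-standard analysis, the idea might be formalized as follows: the language of set theory is extended with the unary predicate standard, and the idealization axiom then yields non-standard natural numbers larger than every standard one,

% Idealization yields an illimited (non-standard) natural number.
\[
  \exists\, \omega \in \mathbb{N} \;\;
  \forall^{\mathrm{st}} n \in \mathbb{N} :\; n < \omega .
\]

Enumerating the Turing machines as M_0, M_1, M_2, ..., the machines with standard index would then be the ``practical'' ones, while a machine with a non-standard index is a purely theoretical object, too large ever to be written down explicitly.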

