Agents

An agent is an animate entity that is capable of doing something on purpose. That definition is broad enough to include humans and other animals, which serve as the subjects of verbs that express actions, as well as computerized robots and softbots. But it depends on other words whose meanings are just as problematical: animate, capable, doing, and purpose. The task of defining those words raises questions that involve almost every other aspect of ontology.

For the primitive terms of any theory, circular definitions are inevitable. As an example, Newton's famous equation F=ma appears to define the force F in terms of the mass m and the acceleration a. Yet that same equation could be used to define the mass in terms of the force and the acceleration. Newton assumed that acceleration could be independently defined in terms of space and time, but Einstein showed that the structure of space and time itself depends on the mass of the entities in it. The fundamental concepts of any subject can only be defined implicitly by laws or axioms that express a pattern of relationships among them. Closed-form definitions are never possible for basic primitives.

Psychology of Agents

Linguistically, an agent is an animate being that can perform some action, and an action is an event that is initiated or carried out by some animate being. The circularity in those definitions can be broken by determining what characteristics of an animate being are necessary for it to play the role of an agent. Then those features can be generalized to a definition of agent that applies to people, animals, robots, and certain kinds of computer programs.

The word animate comes from the Latin anima, which means breath or soul. The medieval Scholastics used anima as a translation of the Greek psychê, which also means breath or soul. The basis for the modern terminology is Aristotle's treatise Peri Psychês, which is called De Anima in Latin or On the Soul in English. Aristotle defined the psyche as the logos or principle that determines what it is for something to be a living entity. Instead of a single principle of the psyche that covered all living things, Aristotle found six related functions, which he arranged in a hierarchy: nutrition, perception, desire, locomotion, imagery, and thought:

We must inquire for each kind of living thing, what is its psyche; what is that of a plant, and what is that of a human or a beast. The reason why the functions are arranged in this order must also be considered. For without nutrition, there does not exist perception, but in plants, nutrition is found without perception. Again, without the sense of touch none of the other senses exists, but touch exists without the others, for many animals have neither vision nor hearing nor sense of smell. And of those that can perceive, some have locomotion, while others have not. Finally and most rarely, they have reason and thought. Those mortal creatures that have reason have all the rest, but not all those that have each of the others have reason; some do not even have imagery, but others live by this alone. The rational intellect requires a separate principle (logos). An appropriate definition of each of these functions would be the most appropriate for the psyche as well. [414b32]

Aristotle's hierarchy of functions was based on his extensive study of the plants and animals known in his day. With his criteria, he was the first to recognize that sponges were primitive animals rather than plants. The subdivisions in the tree of Porphyry (Figure 1.1 in the book Knowledge Representation) are based on Aristotle's distinctions of animate/inanimate, sensitive/insensitive, and rational/irrational.

Competence Levels

Aristotle's hierarchy resembles the competence levels that Rodney Brooks (1986) defined for mobile robots. A robot is an AI system that receives signals from the environment and acts on the environment in a way that helps it to achieve some preestablished goals. In what he called the subsumption architecture for mobile robots, Brooks distinguished eight levels of competence, each with increasingly sophisticated goals and means for achieving them:

  1. Avoiding. Avoid contact with other objects, either moving or stationary.

  2. Wandering. Wander around aimlessly without hitting things.

  3. Exploring. Look for places in the world that seem reachable and head for them.

  4. Mapping. Build a map of the environment and record the routes from one place to another.

  5. Noticing. Recognize changes in the environment that require updates to the mental maps.

  6. Reasoning. Identify objects, reason about them, and perform actions on them.

  7. Planning. Formulate and execute plans that involve changing the environment in some desirable way.

  8. Anticipating. Reason about the behavior of other objects, anticipate their actions, and modify plans accordingly.

Each of these levels depends on and subsumes the competence achieved by the earlier levels. Each level responds to signs, signals, or stimuli from the input sensors and generates output for the motor mechanisms. Yet the robot as a whole does not depend on a strict control hierarchy. The first few levels by themselves could support an insectlike intelligence that responds directly to immediate inputs without doing abstract reasoning or planning. The higher levels could inhibit the lower levels and take control for more sophisticated or intelligent behavior, but the lower levels would still be capable of automatic, reflexlike reactions to danger signals.
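
One way to picture that layering is the control loop sketched below, written here in Python with hypothetical sensor fields and action names chosen only for illustration; it is not Brooks's implementation. Each layer either proposes an action or defers by returning None; the first layer that proposes an action takes control, and the avoidance reflex is tried first so that a danger signal always wins, while the planning layer otherwise subsumes aimless wandering.

    from typing import Callable, Dict, Optional

    # A behavior layer maps sensor readings to an action name,
    # or returns None to defer to the other layers.
    Layer = Callable[[Dict[str, object]], Optional[str]]

    def avoid(sensors):
        """Reflex-like avoidance of nearby obstacles (lowest level)."""
        distance = sensors.get("obstacle_distance")
        if distance is not None and distance < 0.5:
            return "turn_away"
        return None

    def plan(sensors):
        """Follow a stored plan step when one is available (higher level)."""
        return sensors.get("next_plan_step")  # None when no plan exists

    def wander(sensors):
        """Wander aimlessly when nothing more urgent is happening."""
        return "move_random"

    def control(sensors, layers):
        """Return the action of the first layer that does not defer."""
        for layer in layers:
            action = layer(sensors)
            if action is not None:
                return action
        return "idle"

    # Avoidance is tried first, so reflex reactions to danger override
    # everything else; otherwise planning subsumes wandering.
    layers = [avoid, plan, wander]
    print(control({"obstacle_distance": 0.2}, layers))              # turn_away
    print(control({"next_plan_step": "head_for_charger"}, layers))  # head_for_charger
    print(control({}, layers))                                      # move_random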

The behavior of the lower levels depends primarily on immediate inputs. The higher levels depend more heavily on internal representations, such as maps of the environment, memories of previous inputs, stored patterns for recognizing familiar objects, and established habits for repeatable behaviors. Every level responds to signs from the external environment and from other internal levels, but there is an increase in complexity from the automatic responses at the lower levels to the knowledge-based reasoning at the higher levels.

Artificial Psyches

Aristotle's levels may help to clarify and refine Brooks's competence levels. Nutrition, which Brooks omitted, is necessary for a robot to recharge its batteries; and desire or something like it is necessary to determine goals for the robot at every level, from the most primitive nutrition to the most sophisticated planning.

What distinguishes a software agent from an ordinary program is a unifying principle that gives it a certain autonomy. Following Aristotle, that principle may be called its psyche, and its definition can be based on an appropriate definition of each of its functions. The six functions of the psyche, which Aristotle applied to living things from plants and insects to humans, can serve as metaphors for the functions of artificial agents.
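As a rough illustration of that metaphor, the sketch below defines a Python interface with one operation per Aristotelian function. The class name, method signatures, and concrete readings are assumptions made up for this example; only the six function names, the tie between nutrition and recharging, and the tie between desire and goal setting come from the discussion above.

    from abc import ABC, abstractmethod

    class ArtificialPsyche(ABC):
        """Hypothetical agent interface: one abstract operation per
        Aristotelian function of the psyche."""

        @abstractmethod
        def nutrition(self) -> None:
            """Manage energy resources, e.g. decide when to recharge batteries."""

        @abstractmethod
        def perception(self, signals: dict) -> dict:
            """Turn raw input signals into recognized features."""

        @abstractmethod
        def desire(self, percepts: dict) -> str:
            """Choose the goal currently worth pursuing."""

        @abstractmethod
        def locomotion(self, goal: str) -> None:
            """Issue the motor commands that move toward the chosen goal."""

        @abstractmethod
        def imagery(self, percepts: dict) -> dict:
            """Maintain internal maps or images of the environment."""

        @abstractmethod
        def thought(self, goal: str, world_model: dict) -> list:
            """Reason about the internal model and formulate plans."""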

The notion of psyche with its hierarchy of functions provides a framework for classifying agentive behavior. The psyche of an agent is its functional organization, and its level of sophistication depends on how much of the Aristotelian range of function it is able to support. A formal definition of the term agent might be based on a formalization of the informal hierarchies proposed by Aristotle and Brooks. But such a formalization would require a complete axiomatization of all the top-level concepts in the ontology, which Peirce said is a "labor for generations of analysts, not for one."

For further discussion about continuous processes, discrete processes, and causal influences, see the paper on Processes and Causality.


Send comments to John F. Sowa.
