A Prolog to Prolog

John F. Sowa

3. Procedural Prolog

Programming style in Prolog is strongly influenced by the pure Prolog subset. In fact, all of the examples so far have used only this subset, except for comparison predicates like ge and arithmetic operators like (X := A + B). But to make Prolog run on conventional computers, the underlying inference engine imposes a strict order of execution. For purely declarative rules, that order can often be ignored. For complex applications, however, that order can have a major effect on performance. A Prolog programmer should write declarative rules, but should also be aware of the way those rules are executed by the inference engine.

Backtracking and Cuts

Two features distinguish Prolog from conventional programming languages: the unification algorithm for pattern matching, and the backtracking algorithm for executing rules. Backtracking can be viewed as a generalization of ordinary procedure calls. It allows rules that have been called and exited to be reactivated at some later time. The following diagram illustrates the flow through a conventional procedure:

              |                          |
    Call ---->|  Conventional Procedure  |----> Exit
              |                          |
The arrows in this diagram indicate flow in only one direction. The procedure is activated by a call and terminated by an exit. After it exits, there is no way that it can be reactivated except by another call. Many systems allow programs to be interrupted. Then the system saves the state of the current activation, performs some higher priority task, and reactivates the suspended procedure. But one thing that conventional systems never do is to reactivate a procedure that has already run to completion and exited normally.

In Prolog, the rules and facts that define a predicate behave like a procedure. They can be called by a goal and can exit successfully like a conventional procedure. But backtracking adds another way of exiting and another way of reentering. The next diagram shows the flow through the Prolog clauses that define some predicate:

    Call ---->|                         |----> Exit
              |  Set of Prolog Clauses  |
    Fail <----|                         |<---- Redo
The top line shows the ordinary flow through a rule that terminates successfully. But if a goal fails, control does not pass to the next goal to the right. Instead, the goal terminates by failure, and the previous goal on the left is reactivated by a redo. To illustrate the possible entries and exits, consider the following two rules that define the predicate p:
p(X) <- q(X) & r(X) & s(X) & t(X).
p(X) <- u(X) & v(X).
The first rule is called when the head p matches some goal. Then the predicates in its body are called from left to right. If they all succeed, the clause is exited in the normal way. Suppose, however, that predicate s happened to fail. Then control does not pass to t, but to the redo entry for r. Another choice is taken for r, and s is called again. If s fails once more, control passes back to r. If no more options are available for r, then r also fails and control passes back to a choice point for q. If all choices for q fail, then the whole rule fails and control passes to the second rule that defines p. If the second rule fails, then the original goal that called p also fails.
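As a concrete illustration, suppose the four predicates were defined by the following hypothetical facts (chosen only to make the control flow visible):

```prolog
q(1).  q(2).
r(1).  r(2).
s(2).
t(2).

p(X) <- q(X) & r(X) & s(X) & t(X).
```

For the goal p(X), the finder goal q binds X=1 and r(1) succeeds, but s(1) fails. The redo reenters r, which has no other clause matching r(1), so control passes back to q. Then q binds X=2, and r(2), s(2), and t(2) all succeed, so the goal exits with X=2.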

Each option in a rule creates a choice point, a point where control returns when a rule is reentered by a redo. There are two ways of specifying options: multiple rules (or facts) with the same head predicate or the | operator in the body of some rule. The more basic way is by multiple rules, since every use of the | operator can be replaced by multiple rules. The following rule, for example,

a <- b & (c | d) & e.
is equivalent to the pair of rules
a <- b & c & e.
a <- b & d & e.
Although the | operator can always be eliminated, it may simplify the statement of the rules. In some cases, it can also improve their execution speed. In the pair of rules above, for example, the predicate b must be executed a second time if c fails. If b had side effects such as input/output, the final results might be different.
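A small hypothetical sketch makes that difference visible. Suppose b writes a message as a side effect:

```prolog
b <- prst('checking') & nl.
c <- fail.
d.
e.

a1 <- b & (c | d) & e.

a2 <- b & c & e.
a2 <- b & d & e.
```

The goal a1 prints checking once: when c fails, the redo passes directly to d. The goal a2 prints it twice, because the failure of c fails the entire first rule, and the second rule calls b again from the beginning.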

In the basic Prolog inference method, choice points result from multiple rules. The | operator behaves as though it were defined by the following two rules:

(P | Q) <- P.
(P | Q) <- Q.
The first rule says that to execute a goal of the form (P | Q), first do P. If P succeeds, then (P | Q) succeeds without any need to try Q. But if P fails (or some later failure forces a redo), then go to the second rule, which tries Q. In these rules, P and Q are metavariables, whose values are predicates rather than ordinary data structures. These rules are sufficient as long as there are no complications introduced by cuts. In order to handle those complications, IBM Prolog provides | as a built-in operator.

Sometimes the choice points may not be obvious because they are buried in the rules for some other predicate. The next rule, for example, has no explicit choice points:

woman(W) <- human(W) & female(W).
Although there are no alternatives in this rule, the predicate human may be defined by a long list of facts in the workspace. If the goal is woman(X), the first value found by the finder goal human(W) might be W=socrates. Since female(socrates) fails, the system backtracks to the redo point for human to find W=cicero. Since female(cicero) also fails, it keeps trying until it finds some value for W that makes both predicates true.

In some cases, backtracking can be a highly efficient way of searching. In other cases, it may cause a great deal of overhead. Prolog provides a way of controlling the search process by means of a built-in operator called cut, written /. The cut operator commits Prolog to the current path in the search process by throwing away choice points. Thus it cuts out the remaining part of the search space that would otherwise result from the open choice points. However, a careless use of cuts can destroy the logical integrity of the computation. Consider the following example:

human(george).
human(mary).
female(mary).

woman(W) <- human(W) & female(W).
Here, the goal woman(X) succeeds with X=mary. But consider the next example:
woman(W) <- human(W) & / & female(W).
Now human(W) finds W=george. Then the cut throws away the choice point leading to human(mary). But now female(george) fails. Since the choice point is gone, the system cannot redo the human predicate to find another value for W. Therefore the entire rule fails. As this example shows, cuts can destroy the declarative reading of a program. Following are three reasons for using cuts:

  • If a goal is known to have at most one solution, a cut following the goal may prevent unnecessary backtracking.

  • Doer goals that have side effects (such as input/output) may cause erroneous results if executed more than once; a cut can prevent them from being repeated by backtracking.

  • When an error occurs in a deeply nested computation, a cut may be necessary as part of the recovery process.
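To illustrate the first case, suppose each person has exactly one recorded age (the facts below are hypothetical). Once the age of a known person has been found, no other clause can match, and a cut discards the useless choice points:

```prolog
age(sam, 30).
age(mary, 25).

adult(P) <- age(P, A) & / & ge(A, 18).
```

For a bound argument, as in the goal adult(sam), the cut is harmless and prevents pointless backtracking through the remaining age facts. But as in the woman example above, calling adult(P) with P unbound would commit to the first age fact found, so this use of cut is safe only when the first argument is already bound.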

To illustrate backtracking, the count predicate keeps incrementing its argument each time backtracking returns to it:

count(0).
count(I) <- count(J) & (I := J + 1).
The first line starts the count at 0. Then the recursive rule adds 1 to the value computed by the previous call. To see how it works, type the following goal at the keyboard:
count(I) & write(I) & fail.
At the first call, count(I) matches the first rule with I=0. Then the write predicate prints 0 on the screen. The fail predicate takes no arguments and always fails when it is called. The failure causes backtracking to go to the second rule, which calls count to compute J=0. Then I=1, and write prints 1 on the screen. The predicate fail forces backtracking to return to the subgoal count(J), which computes J=1, whereupon I=2. As a result, the system keeps printing all the integers 0, 1, ... until the user causes an interrupt (which can be done by typing sp.).

To keep counting from going on forever, some limit N is necessary. But a cut is also needed to keep backtracking from redoing the previous rules. The two-argument version of count calls the one-argument version to do the counting, but it uses the second argument as a limit:

count(I,N) <- count(I) & (gt(I,N) & / & fail | true).
As long as I is less than or equal to N, the second option in this rule calls the predicate true, which always succeeds. As soon as I becomes greater than N, the cut is executed to prevent backtracking from redoing the one-argument count goal. Then fail causes a failure that stops the process. Because of the limit, the following goal only prints the integers up to 5:
count(I,5) & write(I) & fail.

Cuts in Prolog are like go-to statements in conventional languages: they are very powerful, but they may destroy the structure of the program and make it difficult to read and maintain. The best way to use cuts is to define better structured operators in terms of them. Then use those operators instead of the more primitive cuts. Two operators defined in terms of cuts are the negation ^ and the if-then operator ->. Following is a definition of negation:

^P <- P & / & fail.
^P.
The first rule says that to prove ^P, first try P. If P succeeds, do the cut that throws away the choice point to the second rule. Then the goal fail forces a failure. Since the pointer to the second rule is gone, there is nothing else to do, and the original goal ^P fails. Therefore ^P fails if P succeeds. On the other hand, if P fails, the choice point for the second rule is still available. The second rule does nothing and always succeeds. Therefore ^P succeeds if P fails. In IBM Prolog, the ^ operator is built-in, and it does some additional checking to handle cases where the goal P itself does cuts that may affect the success or failure of ^P.
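A small sketch with hypothetical facts shows negation as failure at work:

```prolog
man(socrates).
man(plato).
married(socrates).

bachelor(X) <- man(X) & ^married(X).
```

The goal bachelor(X) first finds X=socrates, but ^married(socrates) fails because married(socrates) can be proved. Backtracking then yields X=plato, for which ^married(plato) succeeds, since no fact married(plato) is in the workspace.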

The if-then operator is represented by a right pointing arrow. The combination (P -> Q; R) may be read "If P then Q else R." It has the following definition:

(P -> Q; R) <- P & / & Q.
(P -> Q; R) <- R.
If P succeeds, the cut throws away the choice point for the second rule. Then Q is executed; Q may succeed or fail, but backtracking will never cause R to be executed after P has succeeded. On the other hand, if P fails, the second rule is performed to execute R. As with the definition of ^, the system actually does more checking than this definition shows in order to handle cases where P itself contains cuts.

Logically, the if-then operator is not necessary, since it could be simulated with a construction of the form (P & Q | ^P & R). However, this form is less efficient, since it executes P twice. It could also give different results if P had side effects. As an example of the use of if-then, consider the notin predicate, which is the negation of member:

notin(X,L) <- ^member(X,L).
This rule says that X is not in the list L if it is false that X is a member of L. With the cut operator, it is possible to give a more basic definition of notin:
notin(X,nil).
notin(X,X.Tail) <- / & fail.
notin(X,Head.Tail) <- notin(X,Tail).
The first rule says that for any X, X is not in the empty list. The second rule says that if X matches the head of the list, discard the choice point and force an immediate failure. Then the third rule says that if X is not equal to Head (since the choice point must not have been discarded), check whether X is not in Tail. Using the if-then operator leads to a clearer definition:
notin(X,nil).
notin(X,Head.Tail) <-
    (X=Head -> fail; notin(X,Tail)).
The second rule says that if X equals the head, then fail; else check whether X is not in the tail.

Repeated if-then operators can be used to simulate a case statement in other languages. For example, the following rule determines pet sizes:

pet_size(P,S) <- (P=mouse    -> S=small;
                  P=cat      -> S=normal;
                  P=dog      -> S=normal;
                  P=horse    -> S=big;
                  P=elephant -> S=enormous;
                                S=unknown).
This predicate, however, can be defined much more simply by just a collection of facts:
pet_size(mouse, small).
pet_size(cat, normal).
pet_size(dog, normal).
pet_size(horse, big).
pet_size(elephant, enormous).
pet_size(*, unknown).
The case construction may be useful when the test predicate is complex, but as this example illustrates, Prolog does an implicit case check in deciding which rule to invoke. That implicit check is usually more efficient than a case check inside the body of a rule. However, the case construction might be useful to avoid problems with backtracking. Suppose, for example, that P=dog, but the predicate were followed by some other goal that might fail. In that case, backtracking might lead to pet_size(*,unknown), which would give the size unknown for the dog. To avoid that backtracking, cuts could be added to all the clauses but the last:
pet_size(mouse, small) <- /.
pet_size(cat, normal) <- /.
pet_size(dog, normal) <- /.
pet_size(horse, big) <- /.
pet_size(elephant, enormous) <- /.
pet_size(*, unknown).
Each of the conditions has just a single cut. That cut has no effect on the success or failure of the clause in which it occurs, but it prevents backtracking from leading to later clauses. However, this proliferation of cuts is not desirable. They should be avoided by using the if-then operator if backtracking might cause a problem.

Saving Computed Values

Unification binds values to variables, but backtracking unbinds the old values before finding new ones. The automatic backtracking, which can be powerful for certain operations, sometimes erases results that are needed later. There are two basic ways of preserving a result to keep it from being erased by backtracking:

  • Cause some nonlogical side effect that backtracking cannot erase.

  • Use the compute or setof predicates, which accumulate a list of results.

To illustrate a side effect for preserving intermediate results, consider the Fibonacci series 0, 1, 1, 2, 3, 5, 8, ..., where each number is the sum of the previous two. Following is a recursive predicate for computing the Nth number in the series:

fib(0,0).
fib(1,1).
fib(N,X) <-  fib(? N - 1, X1) & fib(? N - 2, X2)
          &  X := X1 + X2.
The first two facts state the starting values for the series. Then the general rule computes X by calling itself recursively to compute the values for (N - 1) and (N - 2). Note the use of the evaluation operator ? to evaluate the arguments before calling fib. Since each call to the general rule for fib makes two recursive calls, the amount of computation increases rapidly. In fact, it increases as the Fibonacci series itself: for N=30, X=832040. In principle, the value of X for N=30 should take only 29 additions; to make 832040 recursive calls is an enormous amount of overhead. One way to improve performance is to add an extra argument to fib that saves the previous value (see Exercise 19). Another way is to use the built-in addax predicate to assert a new fact:
fib(N,X) <-  fib(? N - 1, X1) & fib(? N - 2, X2)
          &  X := X1 + X2
          &  addax(fib(N,X),1).
The last line of this rule adds the new fact fib(N,X) to the workspace. The second argument of addax is used here to specify that the new fact should be placed in position 1 of the clauses for the predicate fib. Without that argument, the new fact would be placed after all of the other clauses, and the recursive rule would still be invoked.

To compute fib(19, 4181), the first version took 108 milliseconds on an IBM 3090, but the version with addax took only one millisecond. Without addax, the Fibonacci series could not be computed for values greater than N=19 without increasing the amount of temporary stack space for Prolog goals. But with addax, the execution time for N=100 was 6 ms. Once all the values have been asserted by means of addax, the answer to each future question can be found in less than a millisecond. This technique is useful for mixing computation with table look-up. For more complicated problems, there may be no simple linear algorithm, and addax can be valuable for avoiding duplicate computations. Game playing programs, for example, frequently reach the same positions through multiple paths. They can often run much faster if they save each position that has been evaluated. This is a typical trade-off of space versus time: chess programs on large machines save previous values, but programs on small machines do not have the space.

The addax predicate has a variety of uses. For example, the predicate married is supposed to be symmetric. But it may happen that one fact married(zeus,hera), but not the converse married(hera,zeus), has been asserted. The following goal searches for all nonsymmetric pairs and asserts the missing fact:

married(X,Y) & ^married(Y,X) &
addax(married(Y,X)) & fail.
The first subgoal married(X,Y) serves as a finder goal to retrieve some married pair. If the converse has also been asserted, the goal ^married(Y,X) fails and forces backtracking to find another pair. If the converse has not been asserted, addax asserts it, and fail forces backtracking to find the next married pair. The process continues until all married pairs have been checked.

Side effects can also accumulate a list or set of results that are generated by some goal. Suppose, for example, that one wanted to find all pairs of siblings in the family database. The goal sibling(X,Y) would find the first pair; then a semicolon from the keyboard would find the second pair. For each pair, a semicolon would be necessary to force the inference engine to backtrack and find another pair. Another way to force backtracking is to use the built-in fail predicate, as in the following goal:

sibling(X,Y) & write(X.Y) & fail.
First, sibling finds one pair of siblings, and write prints out that pair. Then fail forces backtracking to the most recent choice point. Since there are no choices for write, backtracking continues to the redo entry for sibling and searches through the list of facts for child to find another pair who have a common parent. Then write prints out that pair, and fail forces another backtrack. When all pairs of siblings have been printed out, the goal fails.
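The sibling predicate searched here is the one defined over the child facts of the family database. A definition along those lines (the facts shown are hypothetical, and the argument order child(Child,Parent) is assumed) is:

```prolog
child(mary, bill).
child(sam, bill).

sibling(X,Y) <- child(X,P) & child(Y,P) & ^(X=Y).
```

The test ^(X=Y) prevents the rule from pairing a person with himself or herself when both finder goals retrieve the same child fact.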

Since sibling is a symmetric predicate, all pairs are printed twice, once for sam.mary and again for mary.sam. The extra pair can be suppressed by using the less-than predicate, which forces them to appear in alphabetical order:

sibling(X,Y) & lt(X,Y) & write(X.Y) & fail.
Since lt(mary,sam) is true, that pair is printed, but the pair sam.mary is rejected.

Even with the lt predicate, there may still be duplicate data. If Sam and Mary have only one common parent, they are printed only once in the form mary.sam. But if they have two common parents, backtracking finds both parents and prints the same pair twice. The setof predicate avoids that problem. It accumulates a list of results, sorts them, and removes duplicates. To find all pairs of siblings in the database, issue the following goal:

setof(X.Y, sibling(X,Y) & lt(X,Y), L).
The first argument is the pair X.Y whose values are sought; the second argument is the finder goal sibling(X,Y) & lt(X,Y); and the third argument is the list L, which will accumulate all pairs X.Y that make the second argument succeed. In general, the first argument of setof is a single variable or a structure of variables whose values are to be found; the second argument is some finder goal that finds values for the unbound variables in the first argument; and the third argument is a variable that accumulates a list of all instances of the structure in the first argument for which the goal succeeds.
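The sorting and removal of duplicates are easy to see with a few hypothetical facts:

```prolog
likes(sam, pears).
likes(sam, apples).
likes(mary, pears).
likes(sam, pears).
```

For these facts, the goal setof(F, likes(sam,F), L) binds L to the list apples.pears.nil: the two solutions pears are merged into one, and the results appear in sorted order rather than the order in which the facts were found.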

The compute predicate in IBM Prolog is more general than setof, which is included primarily for compatibility with the Edinburgh version. One limitation of setof is the way it handles the empty set. Suppose, for example, that some predicate p(X) had no solutions for X. In that case, the goal setof(X,p(X),L) would fail. But for some purposes, it would be better to succeed with the list L having the value nil. To generate that result, a new predicate set_of may be defined in terms of compute:

set_of(X,P,L) <- compute(set,X,P,nil,L) & /.
set_of(X,P,nil).
The first rule calls compute to find the set of all X that satisfy P and accumulate them in L. Up to that point, it behaves like setof. But if compute fails to find any solution, the second rule returns nil as the answer. The cut symbol / at the end of the first rule prevents backtracking from ever backing into the second rule. This version of the set_of predicate also differs from setof in its treatment of unbound variables in P, but that issue will be discussed in more detail in Chapter 3. For the examples in this chapter, both predicates work equally well, but set_of will be used to maintain consistency with the later chapters, where the difference is sometimes important.

Readers who are familiar with relational databases should note that setof and set_of have a strong similarity to the SELECT statement in the SQL query language. The three arguments of setof correspond exactly to the three parts of a database query:

  1. A list of fields in the database whose values are sought.

  2. Some condition or logical combination of conditions that must hold between those values.

  3. The place to put the results, which may be the screen, a file, or some intermediate storage area.
This similarity should not be surprising, since both Prolog and relational database theory are based on symbolic logic. In fact, database relations correspond to Prolog predicates that are defined solely by lists of facts. The main difference is that database relations typically contain large volumes of data that are stored on external disks, whereas Prolog predicates are held in main storage. Virtual relations or "views" in database theory correspond to Prolog predicates that are defined by rules.

Searching a State Space

Planning, problem solving, and game playing techniques in artificial intelligence are commonly cast as searches through a state space. Each state in the problem is represented as a data structure, permissible moves between states are defined by purely declarative rules, and the search procedure is defined by a problem-independent set of rules. Exercises 6 through 10 at the end of this chapter can all be handled by state-space searches:

  • For the farmer, wolf, goat, and cabbage problem, the state is represented as the structure loc(F,W,G,C), which shows the locations of each of the four critical items. Rules for the possible moves are described starting on page &xfarmer;.

  • For the water jug problem, the state is a pair of numbers (A.B) representing the amounts of water in each jug. Possible moves are described on page &xjug;.

  • For the missionary and cannibal problem, the state is a five-tuple B.M.C.M2.C2, as described on page &xmission;.

  • For the monkey and bananas problem, the state is a triple M.B.P, as described on page &xmonkey;.
For each of these problems (and many more like them), problem-dependent information is embodied in a predicate move(Current,Next), which specifies possible moves from a current state to a next state. The search procedure uses this predicate as a finder goal that finds a new state Next when given a state Current. The task is to find a sequence of safe, permissible states from the start to the goal. Ideally, the procedure should minimize the amount of computing time needed to find the solution.
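As an example of such problem-dependent rules, a move predicate for the water jug problem might be sketched as follows. The jug capacities 4 and 3 are assumptions for this sketch, as is a comparison predicate le, by analogy with the ge, gt, and lt predicates used earlier:

```prolog
move(A.B, 4.B) <- lt(A, 4).
move(A.B, A.3) <- lt(B, 3).
move(A.B, 0.B) <- gt(A, 0).
move(A.B, A.0) <- gt(B, 0).
move(A.B, C.D) <- gt(B, 0) & lt(A, 4) &
    (le(? A + B, 4) -> (C := A + B & D = 0) ;
                       (C = 4 & D := A + B - 4)).
move(A.B, C.D) <- gt(A, 0) & lt(B, 3) &
    (le(? A + B, 3) -> (D := A + B & C = 0) ;
                       (D = 3 & C := A + B - 3)).
```

The first four rules fill or empty one jug; the last two pour one jug into the other until the source is empty or the destination is full. Only these declarative rules change from problem to problem; the search procedures themselves are problem independent.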

The simplest search procedure to implement in Prolog is a depth first, backtracking search that takes advantage of the built-in mechanisms. The predicate

depth_first(Goal, History, Answer)
is a recursive procedure with two inputs, Goal and History, and an output Answer that represents the solution. The variable Goal contains a single state; History is a list of states that have already been traversed, with the current state at the head; and Answer is a list of states from start to goal. The search can be defined by two rules:
depth_first(Goal, Goal.*, Goal.nil).
depth_first(Goal, Current.Past, Current.Future) <-
    move(Current,Next) &
    ^member(Next,Past) &
    depth_first(Goal, Next.Current.Past, Future).
The first rule says that if the goal state is identical to the current state (the head of the history list), then the problem is solved: the answer is just a list of one state (Goal.nil). The second rule uses the pattern Current.Past to split the history list into the Current state and a list Past of previous states. The third argument Current.Future returns the answer as a list with the current state at the head followed by the future list to be computed. In the body of the rule, the goal move takes one step from Current to another state Next. Then the test ^member(Next,Past) checks the history list Past to make sure that Next has not already been visited. Finally, depth_first calls itself recursively to compute Future with the state Next at the head of the history list.

If the farmer is crossing from the west side of the river to the east, the goal is loc(east,east,east,east), and the initial history list has the single starting state, loc(west,west,west,west). The following goal triggers the computation:

depth_first(loc(east,east,east,east),
            loc(west,west,west,west).nil, Answer).
For this goal, the depth first search computes the following answer list. The first state is the start, and the last state is the goal; each intermediate state shows one step along the way.
loc(west,west,west,west) . loc(east,west,east,west) .
loc(west,west,east,west) . loc(east,east,east,west) .
loc(west,east,west,west) . loc(east,east,west,east) .
loc(west,east,west,east) . loc(east,east,east,east) .
As this list shows, the first step moved the farmer and the goat from west to east; then the farmer returned by himself. The eighth state is the goal. This list shows only the successful moves from start to goal. A trace of the move predicate would show a number of false steps that caused one of the later predicates to fail: either the Next state failed the ^member check, or it led to a dead end from which depth_first could not find a solution. The reader should try this example at the keyboard, tracing one predicate at a time: depth_first, move, or member. Tracing all three predicates simultaneously would tend to obscure the process with a large volume of output.

A depth first search maintains a single history list at all times. When one branch of the search reaches a dead end, backtracking automatically prunes the tree up to an active choice point. When the system resumes the search from that point, the history list grows along the new path. Automatic pruning minimizes the amount of storage space required. Storage efficiency is one of the major strengths of a depth first search. One of its weaknesses, however, is that the path it finds may not be the shortest: since it pursues each branch as far as it can before trying any other, it may find a lengthy solution along one branch when some other branch might have been much shorter. In some cases, it may even continue forever in a fruitless search while a neighboring branch may have had a short path to the solution.

A breadth first search keeps track of all possible paths instead of just the current history list. As its second argument, the predicate breadth_first maintains a list of histories, each of which is a list of states. It is defined by two rules, which are slightly more complex than the depth first search. The first rule below is the stopping rule that checks whether the goal has been reached. It is more complex than the stopping rule for the depth first search for two reasons:

  1. Since Histories is a list of lists, the test for the goal state must use the member predicate to scan through the list.

  2. Whereas the depth first rules built up the answer list in the correct order, this rule must reverse the history list selected by member. (Note that the depth first rules use the appending technique for building up the answer list; compare them to the rules for append on page &xappend;.)
breadth_first(Goal, Histories, Answer) <-
    member(Goal.Past, Histories) &
    reverse(Goal.Past, Answer).

breadth_first(Goal, Histories, Answer) <-
    Histories=(*.*) &
    set_of(Next.Current.Past,
           member(Current.Past, Histories) &
           move(Current,Next) & ^member(Next,Past),
           New_history_list) &
    remove_duplicate_head(New_history_list, Clean_list) &
    breadth_first(Goal, Clean_list, Answer).
The second rule uses the set_of predicate to find the set of all paths that extend the search one step farther. The subgoal Histories=(&asterisk;.&asterisk;) merely checks that the list is not nil; this test causes a slight improvement in performance, but it is not essential, since the member predicate would fail if Histories were nil. After that check, set_of finds the set of all lists of the form Next.Current.Past where Current.Past is one of the history lists, there is a permissible move from Current to Next, and Next is not a member of the list Past. All such lists are accumulated as a list of lists in New_history_list. Since the same state can often be reached by multiple paths, the predicate remove_duplicate_head removes all redundant paths in the history list. This check makes a major improvement in performance, but it makes it impossible to use this predicate to find more than one solution. Finally, breadth_first calls itself recursively to continue the search for another step.

The predicate that removes duplicate heads from a list of lists depends on the fact that set_of produces a sorted list. Therefore all duplicates can be removed in a single sweep that compares the heads of adjacent sublists.

remove_duplicate_head(F.S.Tail,Clean) <-
     ((F=(Head.*) & S=(Head.*)) ->
           remove_duplicate_head(F.Tail,Clean) ;
           (remove_duplicate_head(S.Tail,L) & Clean=(F.L))).
remove_duplicate_head(L,L).
The variables F and S match the first and second lists in a list of lists. If both F and S have the same head, the S list is discarded; and the rule calls itself to remove duplicates from the list F.Tail. If they have different heads, it saves both F and S and calls itself to remove duplicates from the list S.Tail.

Since the breadth first search extends all paths one step at a time, it is guaranteed to find a shortest solution. But since it preserves all paths (or at least all nonredundant paths), it uses up more storage space. Many implementations of Prolog (including IBM Prolog) use structure sharing to save space: when the system builds a new list, it moves pointers whenever possible instead of copying the lists. With such systems, the depth_first predicate will use the minimum amount of storage that is logically necessary to do the search. Although Histories appears to be a list of lists, it is actually stored as a tree where common sublists are shared. In particular, the starting state, which is the root of the tree, is common to all of the sublists.

For the problem with the farmer, wolf, goat, and cabbage, the state space is small, and both the depth-first and the breadth-first algorithms find the shortest solution, which takes 7 moves from start to goal. The depth-first search is slightly faster, since it requires less bookkeeping. But for the monkey and bananas problem, the depth-first search finds an 8-move solution where the monkey aimlessly pushes the box from one corner to another; the breadth-first search, however, finds the optimal 4-move solution. Besides finding a better solution, the breadth-first search makes a significant reduction in execution time: on an IBM 3090, the depth-first search takes 53 ms, while the breadth-first search takes 31 ms. Yet neither of these search strategies takes advantage of knowledge about the problem in order to guide the search. With a proper use of heuristics, it is possible to reorder the moves at each step and take the one that has the best chance of success. The resulting strategy, called a best-first search, can find the optimal solution to the monkey and bananas problem in only 6 ms. See Exercises 30 and 31 at the end of this chapter for suggestions on how to use heuristics to guide the search.


Input and Output

Except for differences in syntax, the pure Prolog subset is essentially the same in all versions of Prolog. But the built-in predicates, especially the ones for input/output, diverge from one version to another. This section will use the I/O facilities of IBM Prolog. The principles are similar for other versions, but the details may differ.

The following predicate, called farmer, uses IBM Prolog input and output to provide a user interface for the Farmer program. It prompts the user for the starting side, and it formats the output nicely.

farmer <-  prst(
'Please enter the starting side for all items.')
 & nl
 & prst('Type "east"  or  "west".')
 & nl & readat(S) & opp(S,G)
 & depth_first(loc(G,G,G,G), loc(S,S,S,S).nil, Answer)
 & system('clear') & nl
 & prst('Stages in transit:') & nl
 & prtans(Answer).
The farmer rule uses the following predicate, prtans, to display the answer:
prtans(nil) <- nl &
 prst('Everybody''s safely across.') & nl .
prtans(loc(F,W,G,C).Tail) <-
   nl & prst('Farmer on the '||F)
 & nl & prst('Wolf on the '||W)
 & nl & prst('Goat on the '||G)
 & nl & prst('Cabbage on the '||C)
 & nl & prtans(Tail).
Whereas write has no choice point, read can be used with or without a choice point. For example, to read a file of Prolog terms and display it on the screen, we can call either display1(File) or display2(File), defined as follows:
display1(File) <- open(File, input) &
    read_show1(File) & shut(File).

read_show1(File) <-
    read_no_choice(Term, File) & write(Term) &
    nl & read_show1(File).
read_show1(*).

read_no_choice(Term, File) <- read(Term, File, 1).

display2(File) <- open(File, input) &
    read_show2(File) & shut(File).

read_show2(File) <-
    read_choice(Term, File) & write(Term) & fail.
read_show2(*).

read_choice(Term, File) <- read(Term, File, 3).
open(File, input) <-
 dcio(File, input, file, File, prolog, a, f, 80) & /.

shut(File) <- dcio(File, close).
The predicate read_show1(File) works by calling itself recursively until there are no more terms to be read from the file, while read_show2(File) works by backtracking into a read with a choice point until the last term has been read. dcio is a general-purpose predicate for dealing with files (see Appendix A for more details). Besides dcio, IBM Prolog includes other input-output predicates, which are described in detail in the manuals (IBM 1989a,b).
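The two styles can be mimicked in any language. The following Python sketch reads lines instead of Prolog terms, and the function names are illustrative only:

```python
import io

def display1(stream):
    """Recursive style, like read_show1: read one item; if the
    read fails (end of input), stop; otherwise keep it and recurse."""
    line = stream.readline()
    if not line:                     # nothing left to read
        return []
    return [line.rstrip()] + display1(stream)

def display2(stream):
    """Iterative style, playing the role of backtracking into a
    read with a choice point: keep taking the next item until the
    source is exhausted."""
    shown = []
    for line in stream:
        shown.append(line.rstrip())
    return shown
```

Both functions produce the same result; the difference, as in the Prolog version, is whether the repetition is driven by recursion or by returning to the reading step.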

A common technique is to write a processing loop that does the following: write a prompt (such as tellme:); read an input; if the input is the word quit, exit from the loop; otherwise, process the input and repeat. The following rule does just that:

loop <- write('tellme:') & read(Input) &
        (Input=quit | process(Input) & loop).
In this case, read does not need a choice point. If Input=quit, the first option does nothing and the rule succeeds. Otherwise, the second option passes Input to the predicate process. When process finishes, the recursive call to loop executes the rule again.

Note that quitting the loop above leaves us inside the Prolog interpreter, where we can still issue other goals. The built-in predicate fin takes us out of the Prolog interpreter entirely. It can be used like this:

loop <- write('tellme:') & read(Input) &
        (Input=quit & fin | process(Input) & loop).
Now a quit answer given to the program causes us to leave Prolog.
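The same prompt-read-process pattern can be sketched in Python. The input source is passed in as a function so the sketch can be driven by any sequence of commands; the names are illustrative:

```python
def loop(read_input, process):
    """Prompt, read a command, stop on 'quit', otherwise
    process the command and repeat -- the pattern of the
    loop rule in the text."""
    while True:
        command = read_input('tellme:')
        if command == 'quit':
            return
        process(command)
```

In an interactive program, read_input would print the prompt and call input(); in a test, it can simply draw commands from a list.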

Our examples of looping using built-in predicates are quite procedural. While we prefer declarative programming where possible, a few fragments that can only be read procedurally do little harm, so long as they are small. In Chapter 3, we return to this subject, and look at ways of keeping the procedural fragments small and easy to understand.

String Handling

In IBM Prolog, strings can be concatenated using the || operator, and they can be converted to and from lists using the built-in st_to_li predicate. Concatenation of strings can be combined with arithmetic by using the := operator, as in

X := 'The elapsed time is '||((T+30)/60)||' minutes.'
Other transformations on strings are usually done by using st_to_li to convert to list form. The lists are transformed, element by element if necessary, and the result is changed back to a string, also by using st_to_li. For example, the goal st_to_li('abc',*) gives the answer
st_to_li('abc', a.b.c.nil)
and so does the goal st_to_li(*,a.b.c.nil).

Another common example of string handling is to group a string of characters into a list of words or tokens. The predicate

tokenize(Input, Tokens)

takes a string of characters Input and generates a list of words Tokens. As examples of how tokenize should work, consider the next five goals:
tokenize('Now is the time.', T1).
tokenize('Now    is the   time.    ', T2).
tokenize('abc123', T3).
tokenize('a(1,2,3)', T4).
tokenize('        ', T5).
Since extra blanks are ignored, both T1 and T2 should be the list
'Now' . 'is' . 'the' . 'time' . "." . nil
T3 should be the list of one word 'abc123'.nil, but T4 should be the list
'a' . "(" . '1' . "," . '2' . "," . '3' . ")" . nil
Since a blank string has no tokens, T5 should be nil. Following are the rules for tokenize:
tokenize(String,Tokens) <- st_to_li(String,List) &
    tokenize1(List,Tokens,nil).

tokenize1(L1,Word.Tokens,L3) <-
    token(L1,Word,L2) & /
    & tokenize1(L2,Tokens,L3).
tokenize1(L,nil,L).
The first rule for tokenize1 assumes that token will take the list L1, put the first token in the variable Word, and leave everything following Word in the list L2. Then tokenize1 calls itself recursively to tokenize L2 into the list Tokens with the remainder in L3. The cut / in the first rule for tokenize1 prevents unnecessary backtracking if some rule that calls tokenize happens to fail. If the first rule fails to find a word, the second rule gives nil as the list of tokens.

Various languages assume different rules for tokenizing. A common convention is that a token is either an alphanumeric string or a single nonblank character that is neither a digit nor a letter.

token(" ".L1,Word,L2) <- / & token(L1,Word,L2).
token(C.L1,Word,L2) <- (letter(C) | digit(C)) & /
    & alphanum(L1,Chars,L2) & st_to_li(Word,C.Chars).
token(C.L1,Word,L1) <- st_to_li(Word,C.nil).
The first rule matches any input list starting with a blank; it just ignores the blank and calls itself recursively to process everything after the blank. The second rule looks for either a letter or a digit as the first character; it calls alphanum to put the following alphanumeric characters in Chars; then it calls st_to_li to convert the list C.Chars into the string Word. If the first character is anything other than a letter, digit, or blank, the third rule treats it as a single-character word. The cuts in the first two rules prevent backtracking from reaching the third rule inadvertently. If a string is nil or all blank, the token predicate fails.

The predicate alphanum(L1,Chars,L2) takes a list of alphanumeric characters from the beginning of L1, puts them in Chars, and leaves the remainder of L1 as the list L2:

alphanum(C.L1,C.Chars,L2) <-
    (letter(C) | digit(C)) & /
    & alphanum(L1,Chars,L2).
alphanum(L,nil,L).
The first of these rules takes an alphanumeric character C from the head of a list and calls itself recursively. The second rule leaves nil as the list of characters if the head of the list is not alphanumeric. The remainder L2 may contain letters or digits, but its head cannot be alphanumeric.
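The three token rules and the alphanum loop translate directly into a Python sketch. Python's isalnum stands in for the letter and digit predicates, so this is an approximation of the Prolog behavior, not a transliteration of it:

```python
def tokenize(string):
    """Group a string into tokens: skip blanks, collect runs of
    letters and digits into one token, and treat any other
    nonblank character as a single-character token."""
    tokens, i = [], 0
    while i < len(string):
        c = string[i]
        if c == ' ':                        # first rule: ignore blanks
            i += 1
        elif c.isalnum():                   # second rule: alphanumeric run
            j = i
            while j < len(string) and string[j].isalnum():
                j += 1
            tokens.append(string[i:j])
            i = j
        else:                               # third rule: single character
            tokens.append(c)
            i += 1
    return tokens
```

The five example goals in the text become five calls, and an all-blank string yields the empty list, corresponding to T5 = nil.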

The standard read predicate reads terms that obey the Prolog syntax. To read an arbitrary character string, the readch predicate must be used to read one character at a time. The readlist predicate invokes readch repeatedly to read an entire list:

readlist(C.L) <- readch(C) &
    (readempty & / & L=nil | readlist(L)).
The readempty predicate followed by cut stops recursion if the end of an input line has been reached. Otherwise, readlist calls itself recursively to continue reading characters.

The readtokens predicate invokes readlist to read a list of characters and tokenize1 to group that list into a list of words:

readtokens(Tokens) <-
    readlist(L) & / & tokenize1(L,Tokens,nil).
This rule uses the cut to prevent backtracking from deleting characters from the end of L. All the examples in this section make liberal use of cuts because tokenizing is a deterministic process that has no need of backtracking, but a later parsing stage may require a considerable amount of backtracking.

Changing Syntax

The default notation of a predicate followed by a list of arguments is not always the most readable. Fortunately, the Prolog syntax is easily extensible. That is, new operators can easily be defined. In the fact likes(sam,mary), the first argument is usually considered the subject, and the second one the object. But the infix form, sam likes mary, takes advantage of English word order to make the relationship obvious. Infix notation sometimes coincides with English, but it is not true English. It is simply a syntactic variant of the standard Prolog notation.

Infix notation depends on the op predicate, which defines the syntax of new operators. It has three arguments:

op(Symbol, Assoc, Prec)

where Symbol is an atom that represents the operator, Assoc is its associativity, and Prec is its precedence. For example, before we can write sam likes mary instead of likes(sam,mary), we must declare likes to be an operator by saying op("likes",rl,50). Here, rl stands for right-to-left associativity. (In IBM Prolog, double quotes normally enclose an atom, while single quotes signal a string. If an atom contains no special characters, the double quotes are optional.) For operators of equal precedence, associativity determines whether the operators group to the right or to the left (see the examples later in this section). The second argument of the op predicate must be one of four possible strings: lr defines a left-associative infix operator, rl defines a right-associative infix operator, prefix defines a prefix operator, and suffix defines a suffix operator.

The op predicate may appear in goals and facts like any other predicate. Unlike other predicates, however, op has the side effect of changing the way Prolog parses its input. Following are the declarations for some of the built-in operators:

op("<-", rl, 10).
op("|", rl, 20).
op("&", rl, 30).

A higher precedence means tighter binding. Since <- has precedence 10, | has precedence 20, and & has precedence 30, the expression p<-a&b|c&d has exactly the same effect as p<-((a&b)|(c&d)). For user-defined predicates that represent English verbs, a precedence of 50 is appropriate. That makes them tighter binding than the logical operators, but less tight than the list-forming dot operator. Therefore with precedence 50 for verbs, the following expressions would have the expected parsing:

fred likes cabbage & barbara likes hamburgers.
fred eats nuts.dates.figs.nil.
If the likes operator had a precedence less than 30 and the eats operator had a precedence greater than 100, Prolog would parse the above expressions as though they had been parenthesized as:
fred likes (cabbage & barbara) likes hamburgers.
(fred eats nuts).dates.figs.nil.
Most English verbs are right-associative. Therefore the expression,
walter says mary thinks sue believes fred eats cabbage
is parsed in the following way:
walter says (
    mary thinks (sue believes (fred eats cabbage))).
If the verbs had been left-associative, the following, erroneous parsing would be found:
(((walter says mary)
    thinks sue) believes fred) eats cabbage.
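The effect of associativity on a chain of equal-precedence infix operators can be sketched in Python, building the parse as nested (left, operator, right) tuples. This is a toy illustration of the grouping rule, not Prolog's parser:

```python
def group(terms, ops, assoc):
    """Group the chain  t1 op1 t2 op2 ... tn  into nested
    (left, op, right) tuples.  All operators are assumed to have
    the same precedence; assoc is 'rl' (right-to-left) or
    'lr' (left-to-right)."""
    if not ops:
        return terms[0]
    if assoc == 'rl':            # group to the right: t1 op (rest)
        return (terms[0], ops[0], group(terms[1:], ops[1:], 'rl'))
    # group to the left: (rest) op tn
    return (group(terms[:-1], ops[:-1], 'lr'), ops[-1], terms[-1])
```

Applied to the sentence about walter, mary, sue, and fred, 'rl' produces the correct nesting shown above, while 'lr' produces the erroneous left-grouped parse.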
Prefix operators normally group to the right, and suffix operators group to the left. An operator symbol can have different precedence in its infix, prefix, and suffix forms.

An important use for infix notation is to express the arithmetic operators in a more readable form. IBM Prolog supports arithmetic expressions such as

X := -(4 + 5) / (7 - (2 * 2)).
The result is -3, which is bound to X. The built-in operators include := for assignment, + for addition, / for division, - for subtraction, and * for multiplication. Some of these symbols have other definitions: * is also used as the anonymous variable or don't-care symbol; / is used for rational numbers and as the symbol for the cut operator; and + and - are also used as prefix arithmetic operators. To avoid possible ambiguities, it is best to type a blank on either side of them when they are used as dyadic operators. The corresponding operator precedences are
Although one may call := an assignment operator, it differs from assignment in conventional languages, since it produces a binding that can be undone by backtracking.

Copyright ©2001 by John F. Sowa.
