Meta-Programming in Prolog – Part 1

Introduction

Meta-programming is part of the folklore in Prolog, and is in general a rather old concept with roots tracing back to at least the 1950s. To give a definition that captures all the relevant concepts is outside the scope of this introductory text, but I shall at least provide some pointers that’ll be useful later on. Programs are useful in many different domains. We might be working with numbers, with graphs, with lists or with any other data structure. What happens when the domain is another programming language? Well, nothing, really: from the computer’s point of view there’s no difference between this scenario and the former. But conceptually speaking we’re writing programs that themselves work with programs. Hence the word “meta” in meta-programming. A compiler or interpreter is by this definition a meta-program. In logic programming, however, we usually mean something more specific when we talk about meta-programming, namely programs that take other logic programs as data. Since Prolog is a homoiconic language there’s also nothing that stops us from writing programs that take other Prolog programs as data, and even though there’s a subtle distinction between this and the former scenario the two are often referred to as one and the same. So, to summarize: when we talk about meta-programs in logic programming we’re quite often referring to Prolog programs that use logic programs as data.

The road map for this post is to see some examples of meta-interpreters in Prolog. Then we’re going to use the interpreters to aid program development with a technique known as algorithmic debugging. But enough talk, let’s do this!

Meta-interpreters

There’s still ample room for confusion regarding the word “meta” in meta-interpreter. I shall use the word whenever I refer to an interpreter for a logic programming language, even though this is not strictly correct, since one usually demands that the object language and the meta language are one and the same. That is: we write an interpreter for Prolog in Prolog. There are good reasons for not doing this. Prolog is a large and unwieldy language with many impure features such as cut, I/O, assert/retract and so on, and when we’re working with meta-interpreters we’re often only interested in a small, declarative part of the language. Hence we shall restrict our focus to a programming language akin to pure Prolog, which is basically just a set of Horn clauses/rules.

Even though we still haven’t decided on the syntax for the object language, we know that we must represent at least two things: facts and rules. Since a fact A is equivalent to the rule A \leftarrow true we can store both in the same manner. Assume that P is a definite logic program. How should we represent it? As a list or a search tree? Either could be a good approach if we were interested in implementing dynamic predicates in a declarative way, but since P is static it’s much easier to just use the database and store everything as facts. For every rule A \leftarrow B_1, ..., B_n \in P, represent it as the fact rule(A, [B_1, ..., B_n]). If a rule only has the single atom true in its body, i.e. it is a fact, then the second argument is the empty list. Obviously this is just one of many possible representations, but it’s simple to implement and work with.

As an example, here’s how we would write append/3:

rule(append([], Ys, Ys), []).
rule(append([X|Xs], Ys, [X|Zs]),[append(Xs, Ys, Zs)]).

Simple, but not exactly pleasing to the eye. Fortunately it’s easy to add some syntactic sugar with the help of Prolog’s term expansion mechanism. Instead of directly using rule/2 we can rewrite append/3 as:

append([], Ys, Ys) :- true.
append([X|Xs], Ys, [X|Zs]) :-
    append(Xs, Ys, Zs).

And then define a suitable expansion object so that we end up with a set of rule/2 facts. This is a rather mundane and not very exciting programming task, so only a rough sketch is given below. Now on to the interpreter. It will be defined by a set of prove/1 clauses where the single argument is a list of goals. If you’ve never seen a meta-interpreter in Prolog before, you’re probably in for some serious disappointment since the program is so darn simple. So simple that a first reaction might be that it can’t possibly do anything useful. This first impression is wrong, however, since it’s easy to increase the granularity of the interpreter by implementing features ourselves instead of borrowing them from the Prolog system.
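Here’s a minimal sketch of what that expansion could look like, assuming a Prolog system that supports the term_expansion/2 hook (such as SWI-Prolog) rather than an actual Logtalk expansion object; the helper body_to_list/2 is a name of my own choosing:

% Flatten a clause body into a list of goals; the body true becomes [].
body_to_list(true, []) :- !.
body_to_list((G, Gs), [G|Rest]) :-
    !,
    body_to_list(Gs, Rest).
body_to_list(G, [G]).

% Turn every object-language clause read at load time into a rule/2 fact.
% In practice this hook should be restricted to the object-language files.
term_expansion((Head :- Body), rule(Head, List)) :-
    body_to_list(Body, List).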

As mentioned, the interpreter takes a list of goals as its argument. This means that there’s a base case and a recursive case. In the base case of the empty list we’re done. In the recursive case we have a list of the form [G|Gs] where G is the first goal that shall be proven. How do we prove G, then? By checking whether there’s a corresponding rule rule(A, [B_1, ..., B_n]) such that A and G are unifiable with mgu \theta, and then recursively proving ([B_1, ..., B_n|Gs])\theta. In almost any other language this would require considerable work, but since Prolog is a logic programming language we already know how to do unification. Thus we end up with:

% Initialize the goal list with G.
prove(G) :-
    prove1([G]).

% The empty list of goals is trivially proven.
prove1([]).
% Prove the first goal by resolving it against a rule and proving the
% rule's body, then prove the remaining goals.
prove1([G|Gs]) :-
    rule(G, B),
    prove1(B),
    prove1(Gs).

This is a prime example of declarative programming. We’ve only described what it means for a conjunction of goals to be provable and left the rest to the Prolog system. If you’re unsure why or how the interpreter works I urge you to try it for yourself.
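For instance, assuming the rule/2 facts for append/3 shown earlier have been loaded, a session might look like this:

?- prove(append([a, b], [c, d], Xs)).
Xs = [a, b, c, d].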

Extensions

To prove that I wasn’t lying before, I shall illustrate some neat extensions to the bare-bones interpreter. Strictly speaking we don’t really need anything else, since the language is already Turing complete. It’s for example trivial to define predicates that construct and operate on the natural numbers:

nat(zero) :- true.
nat(s(X)) :- nat(X).

add(zero, Y, Y) :- true.
add(s(X), Y, s(Z)) :-
    add(X, Y, Z).
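Assuming that these clauses are expanded into rule/2 facts just like append/3, the interpreter handles them without further ado:

?- prove(add(s(zero), s(s(zero)), Z)).
Z = s(s(s(zero))).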

But since these operations can be implemented much more efficiently on any practical machine it’s better to borrow the functionality. Hence we shall define a set of built-in predicates that are proved by simply executing them. The easiest way is to add a rule/2 definition for every built-in predicate.

% Let the object language use rule/2 itself: a call to rule/2 is
% proved by consulting the rule/2 database directly.
rule(rule(A, B), []) :-
    rule(A, B).
% Arithmetic is borrowed from the underlying Prolog system.
rule((X is Y), []) :-
    X is Y.
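To see the built-ins in action, here’s a small made-up object-language predicate, len/2 (my own example, not part of the original source), whose second clause relies on is/2:

len([], 0) :- true.
len([_|Xs], N) :-
    len(Xs, N0),
    N is N0 + 1.

The list traversal is handled by the interpreter itself, while the arithmetic goal is handed off to the underlying Prolog system:

?- prove(len([a, b, c], N)).
N = 3.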

Why the first clause? So that we can facilitate meta-programming and use rule/2 in our object language. I mentioned earlier that the interpreter as defined is not really a meta-interpreter in the strict sense of the word, and that Prolog is such a large language that writing meta-interpreters for it is probably not worth the hassle. But now we have a very restricted yet powerful language. Can we write a real meta-interpreter in that language? Yes! Actually it’s hardly any work at all since we already have the source code for the old interpreter.

prove(G) :-
    prove1([G]).

prove1([]) :- true.
prove1([G|Gs]) :-
    rule(G, B),
    prove1(B),
    prove1(Gs).

Glorious. Perhaps not very practical, but glorious.
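As a quick sanity check, assuming the clauses above are expanded into rule/2 facts just like the rest of the object program, we can let the outer interpreter run the inner one:

?- prove(prove(append([a, b], [c, d], Xs))).
Xs = [a, b, c, d].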

Building a proof tree

When our interpreter gives an answer it doesn’t provide any indication as to why that answer was produced. Perhaps the answer is in fact wrong and we want to localize the part of the code that is responsible for the error. The first step in this process is to build a proof tree. A proof tree for a goal \leftarrow G and a logic program P is a tree where 1) the root is labeled G, and 2) each node has a child for every subgoal used to prove it with respect to P. Hence the proof tree is more or less a representation of a sequence of trace steps.

It might sound like a complex task, but it’s really not. All we need is to extend the prove/1 predicate with an additional argument for the proof tree. In the base case of the empty list the tree is the single node true. If [G|Gs] are the current goals then we prove G and Gs and build the proof tree from the two recursive calls.

prove(G, T) :-
    prove1([G], T).

% The empty conjunction is proved trivially; its proof is the node true.
prove1([], true).
% The proof of G is built from the proof of its body; the remaining
% goals contribute the second half of the conjunction.
prove1([G|Gs], ((G :- T1), T2)) :-
    rule(G, B),
    prove1(B, T1),
    prove1(Gs, T2).

And when called with G = append([a,b], [c,d], Xs) the resulting tree looks like this:

?- interpreter::prove(append([a,b], [c,d], Xs), T).
Xs = [a, b, c, d],
T = ((append([a, b], [c, d], [a, b, c, d]):- (append([b], [c, d], [b, c, d]):- (append([], [c, d], [c, d]):-true), true), true), true)

NB: this tree has a lot of redundant true entries. How can we fix this?
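One possible direction, sketched below (one option among several, and not necessarily the intended answer): treat the last goal of a conjunction separately, so that no trailing true is paired with it.

% The empty conjunction is proved trivially.
prove1([], true).
% A single remaining goal gets a proof tree without a trailing true.
prove1([G], (G :- T)) :-
    rule(G, B),
    prove1(B, T).
% With at least two goals left we still build a conjunction of proofs.
prove1([G1, G2|Gs], ((G1 :- T1), T2)) :-
    rule(G1, B),
    prove1(B, T1),
    prove1([G2|Gs], T2).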

Summary

We’re now able to build proof trees. In the next entry we’re going to use them to localize errors in logic programs.

For a good discussion of meta-interpreters in Prolog the reader should turn to The Craft of Prolog by Richard O’Keefe. This post was just the tip of the iceberg. Another interesting subject is to experiment with different search rules, and for this I shamelessly promote my own bachelor’s thesis which is available at http://www.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:325247.

Source code

The source code is available at https://gist.github.com/1330321.
