Archive | November 2011

# Meta-Programming in Prolog – Part 2

Here is the story thus far: a meta-program is a program that takes another program as input or output. Based on this idea we wrote an interpreter for a simple logic programming language and later extended it to build a proof tree. A proof of concept, if you will. Meta-interpreters have lost a lot of steam in recent years, the reason being that they are just too hard to write in most popular programming languages. There’s no a priori reason that prevents us from writing a meta-interpreter in e.g. Python or Java, but the truth is that it’s so much work that it’s not worth the trouble in most cases. The only exception that I can think of is integrated development environments, which typically have at least some semantic awareness of the object language. But these languages don’t have a simple core, which makes parsing awkward to say the least. In logic programming the situation is different. If an interpreter supports definite Horn clauses — facts and rules — and built-in operations, it’s powerful enough to run quite a lot of real programs.

So what’s the purpose then? Is meta-programming just a sterile, academic exercise that has no place in real world software development? Since that was a rhetorical question, the answer is no. A resounding no! First, meta-interpreters are great for experimenting with new language features and implementation techniques. For instance we could ask ourselves whether it would be worthwhile to add support for new search rules in Prolog instead of defaulting to a simple depth-first search. Implementing a new search rule in a meta-interpreter can be done in a few hours, and the resulting program won’t be longer than perhaps a page of code (unless you screwed up, that is). Doing the same task in an imperative programming environment could take days or even weeks depending on the complexity of the existing code base. So meta-programming is useful for prototyping. What else? It can actually be a great aid in debugging. In the following sections we’re going to explain what debugging means in logic programming and develop a simple but functional system for squashing bugs.

### Algorithmic debugging

Assume that we have a logic program $P$ and a query $\leftarrow G$. Sterling and Shapiro cite three possible bugs in The Art of Prolog:

1. The interpreter could fail to terminate.
2. The interpreter could return a false solution $G\theta$. (incorrectness)
3. The interpreter could fail to return a true solution $G\theta$. (insufficiency)

Since the first problem is undecidable in general we shall focus on the latter two. But first we need to decide what the words true and false mean in this context, and in order to do that some remarks about the semantics of logic programs have to be made. If you’re feeling a bit rusty, I urge you to read up a bit on Herbrand models. Wikipedia and my own earlier post are both good starting points. The basic idea is fortunately rather simple. Logic formulas and programs can be viewed as specifications of models. A model is an interpretation in which the program is true. In general there are many, infinitely many, models of any given definite logic program. Which one should we choose? In a model we are free to reinterpret the non-logical vocabulary in any way we see fit. Consider the following logic program:

$natural(zero).$
$natural(s(X)) \leftarrow natural(X).$

It can be seen as a specification of either the set $\{natural(0), natural(1), \ldots\}$ or the set $\{natural(zero), natural(s(zero)), \ldots \}$. Notice the subtle difference. The latter model is simpler in the sense that it doesn’t take us outside the domain of the textual representation of the program itself. Such models are known as Herbrand models. Could we be so lucky that Herbrand models are the only kind of models that we need to pay attention to? This is indeed the case. If a logic program has a model then it also has a Herbrand model. But we still need to pick and choose between the infinitely many Herbrand models. The intuition is that a model of a logic program shouldn’t say more than it has to. Hence we choose the smallest Herbrand model as the meaning of a logic program. Or, put more succinctly, the intersection of all Herbrand models. For a logic program $P$, let $M_P$ denote the smallest Herbrand model of $P$.
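Written out as a formula, the definition in the paragraph above reads:

```latex
M_P = \bigcap \, \{\, M \mid M \text{ is a Herbrand model of } P \,\}
```

This intersection is itself a Herbrand model of $P$, which is why it makes sense to speak of *the* smallest one.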

This is good news since we now know that every well-formed logic program has a meaning. Let’s return to the question of false solutions. This notion is only relevant if the programmer has an intended meaning that differs from the actual meaning of the program. In all but the most trivial programming tasks this happens all the time. An intended meaning $I_P$ of a logic program $P$ is the set of ground goals for which the program should succeed. Note the “should”. If we briefly return to $natural/1$, the intended meaning is nothing else than the actual meaning, i.e. the set $\{natural(zero), natural(s(zero)), \ldots \}$. With this terminology it’s possible to give a precise definition of incorrectness and insufficiency of a logic program $P$:

1. $P$ is incorrect iff $M_P \not\subseteq I_P$.
2. $P$ is insufficient iff $I_P \not\subseteq M_P$.

With these definitions we see that the $natural/1$ program is neither incorrect nor insufficient. But let’s introduce some bugs in it:

$natural1(\_).$
$natural1(s(X)) \leftarrow natural1(X).$

$natural2(zero).$
$natural2(s(X)) \leftarrow natural2(s(X)).$

Can you spot them? $natural1/1$ is incorrect since the base clause is too inclusive: it accepts any term whatsoever, so e.g. $natural1(foo)$ is a member of $M_P$ but not of $I_P$, and hence $M_P \not\subseteq I_P$. In the same vein, $natural2/1$ is insufficient since the recursive clause just calls itself, which makes the program equivalent to the single fact $natural2(zero)$.
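To make the bugs concrete, here is how the two programs behave at a Prolog prompt (the queries and their outcomes are shown as comments; the clauses themselves are the ones from above):

```prolog
%The two buggy programs from above.
natural1(_).
natural1(s(X)) :- natural1(X).

natural2(zero).
natural2(s(X)) :- natural2(s(X)).

%?- natural1(foo).     succeeds, although foo is not a natural number (incorrectness)
%?- natural2(s(zero)). loops forever instead of succeeding (insufficiency)
```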

Quite a lot of legwork to explain something which is actually rather simple! What remains is to put everything in practice. Due to space constraints we’ll focus on the incorrectness problem.

### Incorrectness

A logic program $P$ is incorrect if it gives solutions that are not included in the intended model. In a real-world situation this means that the programmer has found a goal which the program should reject, but it doesn’t, and hence it contains at least one bug. The purpose is to find the part in the program that is responsible for the bug. In logic programming terms this is of course a clause. A clause $A \leftarrow B$ is false iff $B$ is true and $A$ is false. The purpose of the algorithm is to traverse the proof tree and find such a clause. With this in mind we can at least write the top-level predicate:

    false_solution(Goal, Clause) :-
        %Build a proof tree.
        interpreter::prove(Goal, Tree),
        %Find a false clause.
        false_goal(Tree, Clause).


Well, that wasn’t too hard. What about $false\_goal/2$? The tree is of the form $A \leftarrow B$. Hence there are two cases: either $B$ is false or it’s true. If it’s false, then we must continue the search in $B$. If it’s true, then the current clause is the clause that we’re looking for. To determine whether $B$ is false we need an auxiliary predicate, $false\_conjunction/2$, where the first argument is the conjunction of nodes and the second argument is the false clause (if it exists).

    false_goal((A :- B), Clause) :-
        (   false_conjunction(B, Clause) ->
            true
        ;   Clause = (A :- B1),
            %Necessary since we don't want the whole tree.
            extract_body(B, B1)
        ).


By the way, this is a fine example of top-down development. In each step we’re breaking the original problem into easier problems and assume that we’re able to solve them later. $false\_conjunction/2$ is a bit trickier. The first argument is a conjunction of nodes of the form $A \leftarrow B$. Just like before there are two cases since $A$ is either false or true. If it’s true, then we move on to the rest of the nodes. If it’s false, then we’d like to know whether $B$ is true or false. Luckily we’ve already solved this problem before — a call to $false\_goal/2$ will do the trick just fine.

    false_conjunction(((A :- B), _Bs), Clause) :-
        query_goal(A, false),
        !,
        false_goal((A :- B), Clause).
    %Almost the same case as above, but with only one element.
    false_conjunction((A :- B), Clause) :-
        query_goal(A, false),
        !,
        false_goal((A :- B), Clause).
    false_conjunction((_A, As), Clause) :-
        %A is implicitly true.
        false_conjunction(As, Clause).


Only the most perplexing predicate remains: $query\_goal/2$. The second argument is $true$ if $A$ is true and $false$ if it’s false. How can we know this? This is where the programmer’s intended model enters the picture. For now, we’re just going to use her/him as an oracle and assume that all choices are correct. The predicate is then trivial to write:

    query_goal(G, Answer) :-
        %Change later.
        write('Is the goal '),
        write(G),
        write(' true?'),
        nl,
        read(Answer).


In essence the user will be asked a series of questions during a session with the program. Depending on the answers, i.e. the intended model, the program will dive deeper and deeper into the proof tree in order to find the troublesome clause. As an example, here’s an append program where the base case is wrong:

    append([_X], Ys, Ys) :- true.
    append([X|Xs], Ys, [X|Zs]) :-
        append(Xs, Ys, Zs).


And the session with the program would look like this:

[1]  ?- debugging::false_solution(append([a,b,c], [d,e], Xs), Clause).
Is the goal append([b,c],[d,e],[b,d,e]) true?
|: false.
Is the goal append([c],[d,e],[d,e]) true?
|: false.
Xs = [a, b, d, e],
Clause = (append([c], [d, e], [d, e]):-true)

And we clearly see that it’s the base case that’s wrong.
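For completeness, the repair is simply to restore the usual base case (this corrected version is mine, not part of the session above):

```prolog
%Corrected program: the base case now only accepts the empty list.
append([], Ys, Ys) :- true.
append([X|Xs], Ys, [X|Zs]) :-
    append(Xs, Ys, Zs).
```

With this clause, $append([a,b,c], [d,e], Xs)$ gives $Xs = [a,b,c,d,e]$ as intended.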

### Summary

The algorithm was taken from The Art of Prolog. Some simplifying assumptions have been made. Among other things there’s currently no support for built-in operations. This is rather easy to fix, however. A more serious question is whether it would be possible to minimize the role of the oracle, since it’s now queried every time a decision needs to be made. There are two techniques for coping with this: either we do a smarter traversal of the proof tree, e.g. with divide and conquer, or we find a way to approximate the intended model of the program without the use of an oracle.

### Source code

The source code is available at https://gist.github.com/1351227.

# Meta-Programming in Prolog – Part 1

### Introduction

Meta-programming is part of the folklore in Prolog, and is in general a rather old concept with roots tracing back to at least the 1950s. To give a definition that captures all the relevant concepts is outside the scope of this introductory text, but I shall at least provide some pointers that’ll be useful later on. Programs are useful in many different domains. We might be working with numbers, with graphs, with lists or with any other data structure. What happens when the domain is another programming language? Well, nothing, really: from the computer’s point of view there’s no difference between this scenario and the former. But conceptually speaking we’re writing programs that are themselves working with programs. Hence the word “meta” in meta-programming. A compiler or interpreter is by this definition a meta-program. But in logic programming we’re usually referring to something more specific when we’re talking about meta-programming, namely programs that take other logic programs as data. Since Prolog is a homoiconic language there’s also nothing that stops us from writing programs that take other Prolog programs as data, and even though there’s a subtle distinction between the two scenarios they are often referred to as one and the same. So, to summarize, when we’re talking about meta-programs in logic programming we’re quite often referring to Prolog programs that use logic programs as data.

The road map for this post is to see some examples of meta-interpreters in Prolog. Then we’re going to use the interpreters to aid program development with a technique known as algorithmic debugging. But enough talk, let’s do this!

### Meta-interpreters

There’s still ample room for confusion regarding the word “meta” in meta-interpreter. I shall use the word whenever I refer to an interpreter for a logic programming language, even though this is not factually correct since one usually demands that the object language and the meta language are one and the same. That is: we write an interpreter for Prolog in Prolog. There are good reasons for not doing this. Prolog is a large and unwieldy language with many impure features such as cut, IO, assert/retract and so on, and when we’re working with meta-interpreters we’re often only interested in a small, declarative part of the language. Hence we shall restrict our focus to a programming language akin to pure Prolog, which is basically just a set of Horn clauses/rules.

Even though we still haven’t decided the syntax for the object language we know that we must represent at least two things: facts and rules. Since a fact $A$ is equivalent to the rule $A \leftarrow true$ we can store these in the same manner. Assume that $P$ is a definite logic program. How should we represent it? As a list or a search tree? This could be a good approach if we were interested in implementing dynamic predicates in a declarative way, but since $P$ is static it’s much easier to just use the database and store everything as facts. For every rule $A \leftarrow B_1, ..., B_n \in P$, represent it as the fact $rule(A, [B_1, ..., B_n])$. If a rule only has the single atom $true$ in its body, i.e. it is a fact, then the second argument is the empty list. Obviously this is just one of many possible representations, but it’s simple to implement and work with.

As an example, here’s how we would write $append/3$:

    rule(append([], Ys, Ys), []).
    rule(append([X|Xs], Ys, [X|Zs]), [append(Xs, Ys, Zs)]).


Simple, but not exactly pleasing to the eye. Fortunately it’s easy to add some syntactic sugar with the help of Prolog’s term expansion mechanism. Instead of directly using $rule/2$ we can rewrite $append/3$ as:

    append([], Ys, Ys) :- true.
    append([X|Xs], Ys, [X|Zs]) :-
        append(Xs, Ys, Zs).


And then define a suitable expansion object so that we end up with a set of $rule/2$ facts. This is a rather mundane and not very exciting programming task and hence omitted. Now on to the interpreter. It will be defined by a set of $prove/1$ clauses where the single argument is a list of goals. If you’ve never seen a meta-interpreter in Prolog before, you’re probably in for some serious disappointment since the program is so darn simple. So simple that a first reaction might be that it can’t possibly do anything useful. This first impression is wrong, however, since it’s easy to increase the granularity of the interpreter by implementing features instead of borrowing them from the Prolog system.
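For the curious, here is one way the omitted expansion could look in a system that supports the standard $term\_expansion/2$ hook. This is a sketch under that assumption; the helper $body\_list/2$ is my own, not part of the original post:

```prolog
%Convert a clause body (a comma-term) into a list of goals.
body_list(true, []) :- !.
body_list((B, Bs), [B|Rest]) :- !, body_list(Bs, Rest).
body_list(B, [B]).

%Rewrite each clause Head :- Body into a rule/2 fact at load time.
term_expansion((A :- B), rule(A, Bs)) :-
    body_list(B, Bs).
```

With this in place, loading the sugared $append/3$ clauses above produces exactly the two $rule/2$ facts we wrote by hand.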

As mentioned the interpreter takes a list of goals as argument. This means that there’s a base case and a recursive case. In the base case of the empty list we are done. In the recursive case we have a list of the form $[G|Gs]$ where $G$ is the first goal that shall be proven. How do we prove $G$ then? By looking if there’s a corresponding rule $rule(A, [B_1, ..., B_n])$ where $A$ and $G$ are unifiable with mgu $\theta$ and recursively prove $([B_1, ..., B_n|Gs]) \theta$. In almost any other language this would be considerable work, but since Prolog is a logic programming language we already know how to do unification. Thus we end up with:

    %Initialize the goal list with G.
    prove(G) :-
        prove1([G]).

    prove1([]).
    prove1([G|Gs]) :-
        rule(G, B),
        prove1(B),
        prove1(Gs).


This is a prime example of declarative programming. We’ve only described what it means for a conjunction of goals to be provable and left the rest to the Prolog system. If you’re unsure why or how the interpreter works I urge you to try it for yourself.
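Assuming the $rule/2$ facts for $append/3$ from earlier are loaded together with the interpreter, a session could look like this:

```prolog
?- prove(append([a,b], [c,d], Xs)).
Xs = [a, b, c, d].
```

Unification, backtracking and variable bindings all come for free from the underlying Prolog system; the interpreter only supplies the clause selection.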

### Extensions

To prove that I wasn’t lying before I shall illustrate some neat extensions to the bare-bones interpreter. Strictly speaking we don’t really need anything else since the language is already Turing complete. It’s e.g. trivial to define predicates that operate on the natural numbers. For example:

    nat(zero) :- true.
    nat(s(X)) :- nat(X).



But since these operations can be implemented much more efficiently on any practical machine it’s better to borrow the functionality. Hence we shall define a set of built-in predicates that are proved by simply executing them. The easiest way is to add a $rule/2$ definition for every built-in predicate.

    rule(rule(A, B), []) :-
        rule(A, B).
    rule((X is Y), []) :-
        X is Y.


Why the first clause? So that we can facilitate meta-programming and use $rule/2$ in our object language. I mentioned earlier that the interpreter as defined is not really a meta-interpreter in the strict sense of the word, and that Prolog is such a large language that writing meta-interpreters for it is probably not worth the hassle. But now we have a very restricted yet powerful language. Can we write a real meta-interpreter in that language? Yes! Actually it’s hardly any work at all since we already have the source code for the old interpreter.

    prove(G) :-
        prove1([G]).

    prove1([]) :- true.
    prove1([G|Gs]) :-
        rule(G, B),
        prove1(B),
        prove1(Gs).


Glorious. Perhaps not very practical, but glorious.

### Building a proof tree

When our interpreter gives an answer it doesn’t provide any indication as to why that answer was produced. Perhaps the answer is in fact wrong and we want to localize the part of the code that is responsible for the error. The first step in this process is to build a proof tree. A proof tree for a goal $\leftarrow G$ and logic program $P$ is a tree where 1) the root is labeled $G$, and  2) each node has a child for every subgoal with respect to $P$. Hence the proof tree is more or less a representation of a sequence of trace steps.

It might sound like a complex task, but it’s really not. All we need is to extend the $prove/1$ predicate with an additional argument for the proof tree. In the base case of the empty list the tree contains the single node $true$. If $[G|Gs]$ are the current goals then we prove $G$ and $Gs$ and build a proof tree from the recursive goals.

    prove(G, T) :-
        prove1([G], T).

    prove1([], true).
    prove1([G|Gs], ((G :- T1), T2)) :-
        rule(G, B),
        prove1(B, T1),
        prove1(Gs, T2).


And when called with $G = append([a,b], [c,d], Xs)$ the resulting tree looks like this:

?- interpreter::prove(append([a,b], [c,d], Xs), T).
Xs = [a, b, c, d],
T = ((append([a, b], [c, d], [a, b, c, d]):- (append([b], [c, d], [b, c, d]):- (append([], [c, d], [c, d]):-true), true), true), true)


NB: this tree has a lot of redundant $true$ entries. How can we fix this?
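One possible answer, sketched here as my own suggestion rather than part of the original post, is a post-processing pass over the finished tree that prunes the redundant $true$ conjuncts:

```prolog
%Prune redundant 'true' conjuncts from a proof tree of the
%form built by prove/2 above.
simplify((G :- T), (G :- T1)) :- !,
    simplify(T, T1).
simplify((T, true), T1) :- !,
    simplify(T, T1).
simplify((T1, T2), (S1, S2)) :- !,
    simplify(T1, S1),
    simplify(T2, S2).
simplify(T, T).
```

Running it on the tree above would leave only the chain of $append$ nodes, each ending in a single $:- true$ for the base case.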

### Summary

We’re now able to build proof trees. In the next entry we’re going to use them to localize errors in logic programs.

For a good discussion of meta-interpreters in Prolog the reader should turn to The Craft of Prolog by Richard O’Keefe. This post was just the tip of the iceberg. Another interesting subject is to experiment with different search rules, and for this I shamelessly promote my own bachelor’s thesis which is available at http://www.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:325247.

### Source code

The source code is available at https://gist.github.com/1330321.