Archive | Logic programming

Meta-Programming in Prolog – Part 2

Here is the story thus far: a meta-program is a program that takes another program as input or produces one as output. Based on this idea we wrote an interpreter for a simple logic programming language and later extended it to build a proof tree. A proof of concept, if you will. Meta-interpreters have lost a lot of steam in recent years, the reason being that they are just too hard to write in most popular programming languages. There’s no a priori reason that prevents us from writing a meta-interpreter in e.g. Python or Java, but the truth is that it’s so much work that it’s not worth the trouble in most cases. The only exceptions I can think of are integrated development environments, which typically have at least some semantic awareness of the object language. But these languages don’t have a simple core, which makes parsing awkward to say the least. In logic programming the situation is different. If an interpreter supports definite Horn clauses — facts and rules — and built-in operations, it’s powerful enough to run quite a lot of real programs.

So what’s the purpose then? Is meta-programming just a sterile, academic exercise that has no place in real-world software development? Since that was a rhetorical question, the answer is no. A resounding no! First, meta-interpreters are great for experimenting with new language features and implementation techniques. For instance, we could ask ourselves whether it would be worthwhile to add support for new search rules in Prolog instead of defaulting to a simple depth-first search. Implementing a new search rule in a meta-interpreter can be done in a few hours, and the resulting program won’t be longer than perhaps a page of code (unless you screwed up, that is). Doing the same task in an imperative programming environment could take days or even weeks depending on the complexity of the existing code base. So meta-programming is useful for prototyping. What else? It can actually be a great aid in debugging. In the following sections we’re going to explain what debugging means in logic programming and develop a simple but functional system for squashing bugs.

Algorithmic debugging

Assume that we have a logic program P and a goal query \leftarrow G. Sterling and Shapiro cite three possible bugs in The Art of Prolog:

  1. The interpreter could fail to terminate.
  2. The interpreter could return a false solution G\theta. (incorrectness)
  3. The interpreter could fail to return a true solution G\theta. (insufficiency)

Since the first problem is undecidable in general we shall focus on the latter two. But first we need to decide what the words true and false mean in this context, and in order to do that some remarks about the semantics of logic programs have to be made. If you’re feeling a bit rusty, I urge you to read up a bit on Herbrand models. Wikipedia and my own earlier post are both good starting points. The basic idea is fortunately rather simple. Logic formulas and programs can be viewed as specifications of models. A model is an interpretation in which the program is true. In general there are many, in fact infinitely many, models of any given definite logic program. Which one should we choose? In a model we are free to reinterpret the non-logical vocabulary in any way we see fit. Consider the following logic program:

natural(zero).
natural(s(X)) \leftarrow natural(X).

It can be seen as a specification of either the set \{natural(0), natural(1), \ldots\} or the set \{natural(zero), natural(s(zero)), \ldots \}. Notice the subtle difference. The latter model is simpler in the sense that it doesn’t take us outside the domain of the textual representation of the program itself. Such models are known as Herbrand models. Could we be so lucky that Herbrand models are the only kind of models that we need to pay attention to? This is indeed the case. If a logic program has a model then it also has a Herbrand model. But we still need to pick and choose between the infinitely many Herbrand models. The intuition is that a model of a logic program shouldn’t say more than it has to. Hence we choose the smallest Herbrand model as the meaning of a logic program, or, put more succinctly, the intersection of all Herbrand models. For a logic program P, let M_P denote the smallest Herbrand model of P.

This is good news since we now know that every well-formed logic program has a meaning. Let’s return to the question of false solutions. This notion is only relevant if the programmer has an intended meaning that differs from the actual meaning of the program. In all but the most trivial programming tasks this happens all the time. An intended meaning I_P of a logic program P is the set of ground goals for which the program should succeed. Note the “should”. If we briefly return to natural/1, the intended meaning is nothing else than the actual meaning, i.e. the set \{natural(zero), natural(s(zero)), \ldots \}. With this terminology it’s possible to give a precise definition of incorrectness and insufficiency of a logic program P:

  1. P is incorrect iff M_P \not\subseteq I_P.
  2. P is insufficient iff I_P \not\subseteq M_P.

With these definitions we see that the natural/1 program is neither incorrect nor insufficient. But let’s introduce some bugs in it:

natural1(\_).
natural1(s(X)) \leftarrow natural1(X).

natural2(zero).
natural2(s(X)) \leftarrow natural2(s(X)).

Can you spot them? natural1/1 is incorrect since the base clause is too inclusive: M_P is not a subset of I_P since e.g. natural1(-1) is a member of M_P but not of I_P. In the same vein, natural2/1 is insufficient since its recursive clause can never produce anything new, which makes it equivalent to just natural2(zero).

Quite a lot of legwork to explain something which is actually rather simple! What remains is to put everything in practice. Due to space constraints we’ll focus on the incorrectness problem.

Incorrectness

A logic program P is incorrect if it gives solutions that are not included in the intended model. In a real-world situation this means that the programmer has found a goal which the program should reject, but it doesn’t, and hence it contains at least one bug. The purpose is to find the part in the program that is responsible for the bug. In logic programming terms this is of course a clause. A clause A \leftarrow B is false iff B is true and A is false. The purpose of the algorithm is to traverse the proof tree and find such a clause. With this in mind we can at least write the top-level predicate:

   false_solution(Goal, Clause) :-
       %Build a proof tree.
       interpreter::prove(Goal, Tree),
       %Find a false clause.
       false_goal(Tree, Clause).

Well, that wasn’t too hard. What about false\_goal/2? The tree is of the form A \leftarrow B. Hence there are two cases: either B is false or it’s true. If it’s false, then we must continue the search in B. If it’s true, then the current clause is the clause that we’re looking for. To determine whether B is false we need an auxiliary predicate, false\_conjunction/2, where the first argument is the conjunction of nodes and the second argument is the false clause (if it exists).

   false_goal((A :- B), Clause) :-
       (    false_conjunction(B, Clause) ->
            true
       ;    Clause = (A :- B1),
            %Necessary since we don't want the whole tree.
            extract_body(B, B1)
       ).
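
The helper extract\_body/2 is not defined above, so here is one possible definition, assuming the proof trees have the shape produced by prove/2 from Part 1, i.e. a conjunction of (A :- B) nodes terminated by true. It strips away the subtrees so that only the goals of the clause body remain.

    %Not part of the original code: one way to recover the clause body
    %from a proof tree conjunction. For example,
    %((a :- T1), ((b :- T2), true)) becomes (a, (b, true)).
    extract_body(true, true).
    extract_body((A :- _), A).
    extract_body(((A :- _), Rest), (A, Rest1)) :-
        extract_body(Rest, Rest1).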

By the way, this is a fine example of top-down development. In each step we’re breaking the original problem into easier problems and assume that we’re able to solve them later. false\_conjunction/2 is a bit trickier. The first argument is a conjunction of nodes of the form A \leftarrow B. Just like before there are two cases since A is either false or true. If it’s true, then we move on to the rest of the nodes. If it’s false, then we’d like to know whether B is true or false. Luckily we’ve already solved this problem before — a call to false\_goal/2 will do the trick just fine.

   false_conjunction(((A :- B), _Bs), Clause) :-
       query_goal(A, false),
       !,
       false_goal((A :- B), Clause).
   %Almost the same case as above, but with only one element.
   false_conjunction((A :- B), Clause) :-
       query_goal(A, false),
       !,
       false_goal((A :- B), Clause).
   false_conjunction((_A, As), Clause) :-
       %A is implicitly true.
       false_conjunction(As, Clause).

Only the most perplexing predicate remains: query\_goal/2. The second argument is true if the goal in the first argument is true in the intended model, and false otherwise. How can we know this? This is where the programmer’s intended model enters the picture. For now, we’re just going to use him or her as an oracle and assume that all choices are correct. The predicate is then trivial to write:

   query_goal(G, Answer) :-
       %Change later.
       write('Is the goal '),
       write(G),
       write(' true?'),
       nl,
       read(Answer).

In essence the user will be asked a series of questions during a session with the program. Depending on the answers, i.e. the intended model, the program will dive deeper and deeper into the proof tree in order to find the troublesome clause. As an example, here’s an append program where the base case is wrong:

append([_X], Ys, Ys) :- true.
append([X|Xs], Ys, [X|Zs]) :-
    append(Xs, Ys, Zs).

And the session with the program would look like this:

[1]  ?- debugging::false_solution(append([a,b,c], [d,e], Xs), Clause).
Is the goal append([b,c],[d,e],[b,d,e]) true?
|: false.
Is the goal append([c],[d,e],[d,e]) true?
|: false.
Xs = [a, b, d, e],
Clause = (append([c], [d, e], [d, e]):-true)

And we clearly see that it’s the base case that’s wrong.

Summary

The algorithm was taken from The Art of Prolog. Some simplifying assumptions have been made: among other things there’s currently no support for built-in operations. This is rather easy to fix, however. A more serious question is whether it would be possible to minimize the role of the oracle, since it’s currently queried every time a decision needs to be made. There are two techniques for coping with this: either we do a smarter traversal of the proof tree, e.g. with divide and conquer, or we find a way to approximate the intended model of the program without the use of an oracle.

Source code

The source code is available at https://gist.github.com/1351227.


Meta-Programming in Prolog – Part 1

Introduction

Meta-programming is part of the folklore in Prolog, and is in general a rather old concept with roots tracing back to at least the 1950s. To give a definition that captures all the relevant concepts is outside the scope of this introductory text, but I shall at least provide some pointers that’ll be useful later on. Programs are useful in many different domains. We might be working with numbers, with graphs, with lists or with any other data structure. What happens when the domain is another programming language? Well, nothing, really; from the computer’s point of view there’s no difference between this scenario and the former. But conceptually speaking we’re writing programs that are themselves working with programs. Hence the word “meta” in meta-programming. A compiler or interpreter is by this definition a meta-program. But in logic programming we’re usually referring to something more specific when we’re talking about meta-programming, namely programs that take other logic programs as data. Since Prolog is a homoiconic language there’s also nothing that stops us from writing programs that take other Prolog programs as data, and even though there’s a subtle distinction between this and the former scenario they are often referred to as one and the same. So, to summarize: when we’re talking about meta-programs in logic programming we’re quite often referring to Prolog programs that use logic programs as data.

The road map for this post is to see some examples of meta-interpreters in Prolog. Then we’re going to use the interpreters to aid program development with a technique known as algorithmic debugging. But enough talk, let’s do this!

Meta-interpreters

There’s still ample room for confusion regarding the word “meta” in meta-interpreter. I shall use the word whenever I refer to an interpreter for a logic programming language, even though this is not factually correct since one usually demands that the object language and the meta language are one and the same. That is: we write an interpreter for Prolog in Prolog. There are good reasons for not doing this. Prolog is a large and unwieldy language with many impure features such as cut, IO, assert/retract and so on, and when we’re working with meta-interpreters we’re often only interested in a small, declarative part of the language. Hence we shall restrict our focus to a programming language akin to pure Prolog which is basically just a set of Horn clauses/rules.

Even though we still haven’t decided the syntax for the object language we know that we must represent at least two things: facts and rules. Since a fact A is equivalent to the rule A \leftarrow true we can store these in the same manner. Assume that P is a definite logic program. How should we represent it? As a list or a search tree? This could be a good approach if we were interested in implementing dynamic predicates in a declarative way, but since P is static it’s much easier to just use the database and store everything as facts. For every rule A \leftarrow B_1, ..., B_n \in P, represent it as the fact rule(A, [B_1, ..., B_n]). If a rule only has the single atom true in its body, i.e. it is a fact, then the second argument is the empty list. Obviously this is just one of many possible representations, but it’s simple to implement and work with.

As an example, here’s how we would write append/3:

rule(append([], Ys, Ys), []).
rule(append([X|Xs], Ys, [X|Zs]), [append(Xs, Ys, Zs)]).

Simple, but not exactly pleasing to the eye. Fortunately it’s easy to add some syntactic sugar with the help of Prolog’s term expansion mechanism. Instead of directly using rule/2 we can rewrite append/3 as:

append([], Ys, Ys) :- true.
append([X|Xs], Ys, [X|Zs]) :-
    append(Xs, Ys, Zs).

And then define a suitable expansion object so that we end up with a set of rule/2 facts. This is a rather mundane and not very exciting programming task and hence omitted, though a rough sketch follows below. Now on to the interpreter. It will be defined by a set of prove/1 clauses where the single argument is a list of goals. If you’ve never seen a meta-interpreter in Prolog before, you’re probably in for some serious disappointment since the program is so darn simple. So simple that a first reaction might be that it can’t possibly do anything useful. This first impression is wrong, however, since it’s easy to increase the granularity of the interpreter by implementing features instead of borrowing them from the Prolog system.
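
Before moving on, here is a rough sketch of what such an expansion could look like in plain Prolog, using the term\_expansion/2 hook found in e.g. SWI-Prolog instead of a Logtalk expansion object. This is only an illustration, not the omitted original, and a real version would make sure that only the object language clauses are rewritten.

%Convert a clause body into a list of goals.
body_to_list(true, []) :- !.
body_to_list((A, B), [A|As]) :- !,
    body_to_list(B, As).
body_to_list(A, [A]).

%Rewrite every subsequently loaded clause Head :- Body into a rule/2 fact.
term_expansion((Head :- Body), rule(Head, Goals)) :-
    body_to_list(Body, Goals).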

As mentioned, the interpreter takes a list of goals as argument. This means that there’s a base case and a recursive case. In the base case of the empty list we are done. In the recursive case we have a list of the form [G|Gs] where G is the first goal that shall be proven. How do we prove G then? By checking whether there’s a corresponding rule rule(A, [B_1, ..., B_n]) where A and G are unifiable with mgu \theta, and then recursively proving ([B_1, ..., B_n|Gs]) \theta. In almost any other language this would be considerable work, but since Prolog is a logic programming language we already know how to do unification. Thus we end up with:

%Initialize the goal list with G.
prove(G) :-
   prove1([G]).

prove1([]).
prove1([G|Gs]) :-
   rule(G, B),
   prove1(B),
   prove1(Gs).

This is a prime example of declarative programming. We’ve only described what it means for a conjunction of goals to be provable and left the rest to the Prolog system. If you’re unsure why or how the interpreter works I urge you to try it for yourself.
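
For instance, with the rule/2 facts for append/3 from above loaded, a session could look something like this:

?- prove(append([a,b], [c,d], Zs)).
Zs = [a, b, c, d].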

Extensions

To prove that I wasn’t lying before I shall illustrate some neat extensions to the bare-bones interpreter. Strictly speaking we don’t really need anything else since the language is already Turing complete. It’s e.g. trivial to define predicates that represent and operate on the natural numbers. For example:

nat(zero) :- true.
nat(s(X)) :- nat(X).

add(zero, Y, Y) :- true.
add(s(X), Y, s(Z)) :-
   add(X, Y, Z).

But since these operations can be implemented much more efficiently on any practical machine it’s better to borrow the functionality. Hence we shall define a set of built-in predicates that are proved by simply executing them. The easiest way is to add a rule/2 definition for every built-in predicate.

rule(rule(A, B), []) :-
    rule(A, B).
rule((X is Y), []) :-
    X is Y.

Why the first clause? So that we can facilitate meta-programming and use rule/2 in our object language. I mentioned earlier that the interpreter as defined is not really a meta-interpreter in the strict sense of the word, and that Prolog is such a large language that writing meta-interpreters for it is probably not worth the hassle. But now we have a very restricted yet powerful language. Can we write a real meta-interpreter in that language? Yes! Actually it’s hardly any work at all since we already have the source code for the old interpreter.

prove(G) :-
    prove1([G]).

prove1([]) :- true.
prove1([G|Gs]) :-
    rule(G, B),
    prove1(B),
    prove1(Gs).

Glorious. Perhaps not very practical, but glorious.

Building a proof tree

When our interpreter gives an answer it doesn’t provide any indication as to why that answer was produced. Perhaps the answer is in fact wrong and we want to localize the part of the code that is responsible for the error. The first step in this process is to build a proof tree. A proof tree for a goal \leftarrow G and logic program P is a tree where 1) the root is labeled G, and 2) each node labeled by a goal has one child for every subgoal of the clause in P used to resolve it. Hence the proof tree is more or less a representation of a sequence of trace steps.

It might sound like a complex task, but it’s really not. All we need is to extend the prove/1 predicate with an additional argument for the proof tree. In the base case of the empty list the tree contains the single node true. If [G|Gs] are the current goals then we prove G and Gs and build a proof tree from the recursive calls.

prove(G, T) :-
    prove1([G], T).

prove1([], true).
prove1([G|Gs], ((G :- T1), T2)) :-
    rule(G, B),
    prove1(B, T1),
    prove1(Gs, T2).

And when called with G = append([a,b], [c,d], Xs) the resulting tree looks like this:

?- interpreter::prove(append([a,b], [c,d], Xs), T).
Xs = [a, b, c, d],
T = ((append([a, b], [c, d], [a, b, c, d]):- (append([b], [c, d], [b, c, d]):- (append([], [c, d], [c, d]):-true), true), true), true)

NB: this tree has a lot of redundant true entries. How can we fix this?

Summary

We’re now able to build proof trees. In the next entry we’re going to use them to localize errors in logic programs.

For a good discussion of meta-interpreters in Prolog the reader should turn to The Craft of Prolog by Richard O’Keefe. This post was just the tip of the iceberg. Another interesting subject is to experiment with different search rules, and for this I shamelessly promote my own bachelor’s thesis which is available at http://www.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:325247.

Source code

The source code is available at https://gist.github.com/1330321.

Arcane Abuses of append

First I should point out that the following predicates hardly qualify as arcane, and they’re not really that abusive either. But they do use append, and one out of three isn’t so bad after all? append/3 is one of Prolog’s most useful predicates and is often one of the list predicates first taught to students. Beginners, and especially those familiar with other programming languages, sometimes have a hard time recognizing the multiple usages of the predicate however. Just for reference and to make sure that we’re on the same page, the usual definition goes like this:


   append([], Ys, Ys).
   append([X|Xs], Ys, [X|Zs]) :-
        append(Xs, Ys, Zs).

Nothing fanciful. Just a standard recursive predicate which holds if Zs is the list obtained by appending all the elements of Xs with all the elements of Ys. So when should we use this predicate? When we want to append two lists? No! In all my years of using Prolog I don’t think I’ve used append/3 for this purpose in a serious program even once. The reason is that difference lists are usually a much better choice in these instances since they can be appended in constant instead of linear time. So let’s try to figure out some other usages.

member(X, Xs) is true if X is a member of the list Xs. It’s of course not hard to write this as a recursive predicate as we did with append/3, but why bother if there’s an easier way? So let’s solve it with append/3 instead. Upon a first inspection it might not look like they have anything to do with each other. How can we find an element in a list by appending two lists? The answer is actually pretty simple. We know that we take a list, Xs, as argument. Can we find two other lists such that they give Xs when appended? Of course. Just call append/3 with Xs as the third argument. Remember that append/3 is a relation and not a function:

?- Xs = [a,b,c], append(A, B, Xs).
Xs = [a, b, c],
A = [],
B = [a, b, c] ;
Xs = [a, b, c],
A = [a],
B = [b, c] ;
Xs = [a, b, c],
A = [a, b],
B = [c] ;
Xs = A, A = [a, b, c],
B = [] ;
false.

That was the first step. Now let’s find an interpretation of membership that can be cast in terms of these three lists. How about this: X is a member of Xs if Xs can be divided into two parts, A and B, such that X comes between A and B. Put into code this is:

    member(X, Xs) :-
        append(_A, [X|_B], Xs).
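
Used as a generator the same definition enumerates the members of the list on backtracking, roughly like so:

?- member(X, [a,b,c]).
X = a ;
X = b ;
X = c ;
false.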

Very easy once you know the trick, but difficult if one is afraid of using append/3 as a relation instead of a function. A similar problem is the sublist problem: given a list Xs, is Ys a sublist of Xs? Again it’s not hard to imagine how a recursive version would look, but perhaps we can find an easier solution with the help of append/3. A sublist is a contiguous subsequence. This can be expressed in terms of three lists: Ys is a sublist of Xs if there exist two lists, A and B, such that A appended with Ys and B results in Xs. That was quite a mouthful, but in essence it’s the same thing as we did with member/2, with the difference being that we’re looking for a list instead of a single element. Assume that we had the predicate append/4. Then sublist could be solved as:

sublist(Xs, Ys) :-
    append(_A, Ys, _B, Xs).

Alas, since we don’t have such a predicate we’re going to use append/3 two times instead. First Xs is divided into A and B. Then we find the sublist Ys by saying that Ys is a suffix of B.

    sublist(Xs, Ys) :-
        append(_A, B, Xs),
        append(_, Ys, B).

It should be noted that this solution gives rise to many duplicate answers. Why? Assume that Xs = [a,b]. Then the answer Ys = [b] can be found by first binding B to [a,b] and then Ys to the suffix [b] of this list. Or it can be found by binding B to [b] and then binding Ys to the suffix [b] of B. This is a bummer since we’re only interested in one of these answers. The implementation of an optimized version is left as an exercise to the reader.

select/3, last/2 and other basic list processing predicates can be implemented in essentially the same manner. As a last example we’re going to implement nth/3 with append/3 and length/2. nth(X, Xs, N) is true if X is the N:th member of Xs, starting from 0. One observation is enough to give us a solution: X is the N:th element of Xs if the number of elements preceding X in Xs is equal to N. This is easy to check with length/2:

    nth(X, Xs, N) :-
        append(A, [X|_], Xs),
        length(A, N).
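
A quick sanity check (the exact formatting of the answers depends on your system):

?- nth(X, [a,b,c], 1).
X = b ;
false.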

A question to the observant reader: why is the order of the two goals in the body not swapped? Also, as a concluding remark: I’ve been told that it’s not always a good idea to do something just because you can. That might well be true. This version of nth/3 is rather inefficient and I would not recommend anyone to try it at home!

Prolog’s Makin’ Music – Part 3

This time we’ll have a look at some techniques for automatically generating music — or rather, to be more accurate, melodies. Since we’ve deduced that a musical scale is a mathematical structure in which it’s possible to perform all the standard operations, we have quite a lot of freedom when it comes to the choice of a suitable formalism. We’ll also make some simplifications to make the job easier: namely that our melodies consist of a single list of notes where all notes are assumed to be of equal importance, e.g. played with the same timbre and tempo. This means that the resulting melodies won’t be that pleasing to the ear, but there’s of course nothing that stops us from using one of these melodies as a building block in a larger musical composition. I suppose that we still need musicians for something!

Lindenmayer systems

A Lindenmayer system, or an L-system, is a formal grammar quite similar to a context-free grammar. The goal is to rewrite a starting string, the axiom, by applying as many rules as possible. A rule is simply an if-then statement of the form: if the token is X then replace it by Y. Formally speaking this information can be summarized as a tuple:

L = (V, \omega, P)

Where V is a set of variables, \omega the starting axiom and P the set of production rules, i.e. mappings from variables to replacement strings. The symbols that don’t appear in V, the constants, are always left untouched. The first example on Wikipedia is an Algae system. It has two variables, A and B, A as starting axiom and the two rules:

A \rightarrow AB

B \rightarrow A

So the strings that will be produced are: A, AB, ABA, ABAAB, and so on. It shouldn’t be hard to see how the rules were applied. First the axiom was used. Then, the rule for A was used which produced AB. Then the rules for both A and B were used which produced AB and A, i.e. ABA.

We are free to interpret the structure in an L-system in any way we see fit. For example, we could interpret A in the Algae system as “play the first note in the scale” and B as “play the second note in the scale”. I shall however use something a bit closer to the Logo notation that is commonly used to visualize L-systems. It consists of the following commands:

  • f — draw forward.
  • + — turn right.
  • -  — turn left.
  • s — push the current point on the stack.
  • r — pop an entry from the stack.

But since we’re working with scales, and not images, we have to reinterpret these commands. I propose the following:

  • f — play the current note.
  • + — increase the current note.
  • - — decrease the current note.
  • s — push the current note on the stack.
  • r — pop an entry from the stack.

Hence we’re going to use L-systems that produce strings in this format. From such a string it’s then possible to extract a melody. For example, the string "f+f-f" could be interpreted as the notes 0,1,0.

We’ll return to this later. For now, let’s concentrate on implementing L-systems in Logtalk. This can be done in a large number of ways, but once we’ve chosen a suitable representation everything else will more or less follow automatically. Every L-system will be represented by an axiom and a set of production rules for both variables and constants. Since the production rules take symbols as argument and produce strings/lists, DCG’s are a fine choice. For the moment we can ignore everything else and just stipulate what an L-system is.

:- protocol(l_system).

   :- public(rule//1).
   :- public(axiom/1).

:- end_protocol.

:- object(algae,
    implements(l_system)).

   axiom([a]).

   rule(a) --> [a,b].
   rule(b) --> [a].

:- end_object.

Then we’ll need a predicate that takes a string as input and applies all applicable production rules. Since the rules themselves are written in DCG notation, it’s easiest to continue with this trend. The predicate will take a string and an L-system as input, and iteratively apply the rules for the elements in the string.


next([], _) --> [].
next([X|Xs], L) -->
    L::rule(X),
    next(Xs, L).

And all that remains is a predicate that calls next//2 for a predetermined number of generations. It’s more or less a standard loop: if N is 1, then the resulting string is the axiom of the L-system. Otherwise, recursively run the L-system for N - 1 generations and then run it once more.

generation(1, L, X) :-
    L::axiom(X).
generation(N, L, X) :-
    N > 1,
    N1 is N - 1,
    generation(N1, L, Y),
    phrase(next(Y, L), X, []).
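
Assuming the two predicates above live in an l_systems object, as the code further down suggests, the algae system can be run like this (answer formatting may vary):

?- l_systems::generation(4, algae, X).
X = [a, b, a, a, b].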

This is almost too easy! For reference, let’s also implement an L-system that makes use of the Logo commands previously discussed.

:- object(koch_curve,
    implements(l_system)).

    axiom([f]).

    rule(-) --> [-].
    rule(+) --> [+].
    rule(f) --> [f,+,f,-,f,-,f,+,f].
:- end_object.

This structure is known as a Koch curve, and when interpreted as drawing commands it produces the familiar jagged fractal curve.

Now we’ll need a predicate that transforms a list of commands into a list of notes. It’ll need 4 input arguments:

  • Xs — the list of commands.
  • Scale — the scale that the notes shall be generated according to.
  • N — the starting/current note.
  • S — the stack.

And one single output argument:

  • Ns — the resulting list of notes.

It’s not that hard to implement since it only consists of a case analysis of the command list. For example, if the command list is empty then the list of notes is empty. If the command is f, then we add the current note N to Ns, and so on for all the other commands.

    transform([], _, _, _, []).
    transform([f|Cs], Scale, N, S, [N|Ns]) :-
        transform(Cs, Scale, N, S, Ns).
    transform([-|Cs], Scale, N, S, Ns) :-
        Scale::lower(N, N1),
        transform(Cs, Scale, N1, S, Ns).
    transform([+|Cs], Scale, N, S, Ns) :-
        Scale::raise(N, N1),
        transform(Cs, Scale, N1, S, Ns).
    transform([s|Cs], Scale, N, S, Ns) :-
        transform(Cs, Scale, N, [N|S], Ns).
    transform([r|Cs], Scale, _, [N|S], Ns) :-
        transform(Cs, Scale, N, S, Ns).
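
As a quick check, the string "f+f-f" from before does indeed give the notes 0, 1, 0 when interpreted in the chromatic scale from Part 2 (again assuming that transform/5 lives in the l_systems object):

?- l_systems::transform([f,+,f,-,f], chromatic_scale, 0, [], Ns).
Ns = [0, 1, 0].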

Putting everything together

We can now generate command strings from L-systems and convert these into notes in a given scale. What remains is to convert the notes into frequencies with a specific duration. These can then be converted into samples and be written to a WAV file.

    generate_notes(L, I, Scale, Notes, Number_Of_Samples) :-
       l_systems::generation(I, L, X),
       Scale::nth(0, Tonic),
       l_systems::transform(X, Scale, Tonic, [], Notes0),
       findall(F-0.2,
              (list::member(Note, Notes0),
               Scale::frequency(Note, F)),
              Notes),
       length(Notes, Length),
       synthesizer::sample_rate(SR),
       Number_Of_Samples is Length*(SR/5).

The value 0.2, the duration of each note, is of course just an example and can be changed at whim. This is all we need in order to crank out some simple tunes. Luckily, I’ve already prepared some samples for your auditory pleasure.

Koch curve in C major

This is the curve depicted earlier. To be frank it sounds kind of dreadful, but fortunately the other samples are somewhat more interesting. Next up is the dragon curve!

Dragon curve in C major

Dragon curve in C minor

I think it sounds much better than the Koch curve, but that might be due to the fact that I view my creations with rose-tinted eyes, unable to see the unholy abomination that is their true form. Let’s have a look at the Hilbert curve.

Hilbert curve in C major

Hilbert curve in C minor

Catchy! The last L-system is a fractal plant.

Fractal plant in C major

Fractal plant in C minor

I think the results are quite interesting, and this is only the tip of the iceberg since it’s possible to create any kind of L-system and interpret it as a melody. The whole set is available at Soundcloud.

I initially intended to include a section in which I created a Prolog interpreter that for each refutation also produced a melody, but the time is already running out. It’s not impossible that I’ll return to the subject at a later date however!

Source code

The source code is available at https://gist.github.com/1034067.

Prolog’s Makin’ Music – Part 2

Scales, scales and scales

It’s time to bring on the noise! To recapitulate the story thus far, it suffices to say that we’re now able to write raw audio data in the WAV format after some mysterious bit-fiddling. As already mentioned we could in principle start to crank out tunes at this stage, but to illustrate why this isn’t a good idea I have prepared a small sample file containing 1 million randomly generated samples in the range [-32768, 32767]. It is perhaps also instructive to see a visual representation of the samples. Here are the results:

Soundcloud link.
As expected the result is garbled noise. There’s obviously no discernible structure to speak of since the samples are literally all over the place. We want something more harmonic and symmetrical that we can actually work with. Why do classical instruments (excluding drums, cymbals and other unpitched instruments) have such nice acoustic properties? To make a long story short, many instruments produce sounds with the help of vibrating strings – oscillations at different frequencies, e.g. a sine wave. Different frequencies give us different tones. In e.g. a piano the keys to the right have higher frequencies than those to the left. Hence, to construct something akin to an instrument we need a system of frequencies and a function that can generate the corresponding waves. Obviously these problems have already been solved many times over in the history of music theory, and it would be ignorant not to take advantage of this. Let’s start with the first problem of finding a system of frequencies, a scale. This is actually harder than expected. We know that the scale should go from lower to higher frequencies and that there should at the very least exist some relationship between them. A first attempt might be to start the scale at an arbitrary frequency, e.g. 500, and for every new frequency add 50. This would result in a scale where the difference between any two adjacent frequencies is constant, or in other words a linear scale. With 12 frequencies we would obtain:

0 500
1 550
2 600
3 650
4 700
5 750
6 800
7 850
8 900
9 950
10 1000
11 1050

Our first scale! The first number in the column, the identification number, is called a note (to be more precise a note also needs a duration). Hence the purpose of the scale is to give frequencies to notes. The traditional notation (pun not intended) for a series of 12 notes is A, A\sharp, B, C, C\sharp, D, D\sharp, E, F, F\sharp, G, G\sharp, where the notes with funny looking sharp signs correspond to the small, black keys on a piano (so-called “accidentals”). For simplicity we’ll use the numeric notation though. The next question is how this scale sounds when it is played in succession.

Linear scale

Perhaps surprisingly, it sounds pretty terrible even though it’s not that simple to say why. Wrong starting frequency? Wrong increment? Well, maybe, but the real problem is that as the frequencies increase the perceived difference, the distance, gets smaller and smaller, which results in an unsymmetrical scale. Hence we want a scale where the perceived distance between any two adjacent frequencies is a constant, which means that the ratio between them should be constant. This is known as equal temperament. To be more precise the distance doesn’t have to be a constant, but it has to be a multiple of the smallest possible step in the scale. For example we could have a scale where the distance between the first and second frequency is 1.5, but where the distance between the second and third frequency is 1.5 \times 2 = 3.

With this in mind it’s not too hard to create a new scale. The frequency of note N (counting from 0) is then Start \times Ratio^{N}, where Start and Ratio are constants. For Start = 500 and Ratio = 1.06978 we get the scale:

0 500.0
1 534.89
2 572.2
3 612.1
4 654.9
5 700.6
6 749.4
7 801.7
8 857.7
9 917.5
10 981.6
11 1050.0

Equal temperament scale

To be honest it still doesn’t sound very good, but at least it’s a step in the right direction. Somehow it doesn’t provide enough closure, and if we were to extend it even further the new notes wouldn’t really relate to the old notes in a natural way (what is “natural” is of course biased by experience and tradition). Here’s an idea: what if the extended frequencies were (approximately) multiples of the first 12 frequencies? That is F_{12} \approx 2 \times F_0, F_{13} \approx 2 \times F_1 and so on. It’s not too hard to derive such a constant. Let x be the constant. Then F_{12} = 2\times F_0 = F_0 \times x ^{12} \Leftrightarrow \frac{2\times F_0}{F_0} = x^{12} \Rightarrow x = 2^{1/12}. Hence the general formula is F_n = F_0 \times x^{n} = F_0 \times (2^{1/12})^n = F_0 \times 2^{n/12}.
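
As a quick sanity check, F_1 = 500 \times 2^{1/12} \approx 529.7, which matches the second row in the table below, and F_{12} = 500 \times 2^{12/12} = 1000 = 2 \times F_0, which is exactly the doubling we were after.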

If the starting frequency is 500 we then get the scale:

0 500.0
1 529.7
2 561.2
3 594.6
4 629.0
5 667.4
6 707.1
7 749.2
8 793.7
9 840.9
10 890.9
11 943.9

Equal temperament scale – second attempt

It’s hard to notice the difference in the first few notes, but toward the end of the scale the difference gets more and more pronounced. Now we have something quite close to what’s actually used in the real world. The only difference is the starting frequency, which is usually 440 Hz, the so-called standard concert pitch. This value is somewhat arbitrary, but just for reference here’s what we get:

0 440.0
1 466.2
2 493.9
3 523.3
4 554.4
5 587.3
6 622.3
7 659.3
8 698.5
9 739.9
10 783.9
11 830.6

Chromatic scale

Fortunately it’s rather easy to implement scales once we have the theory behind us. There are two basic choices for the representation: either we work with the raw frequencies in the scale, or we work with the notes and extract the frequencies when needed. I shall go with the second option since it’s often easier to work with notes. Interestingly enough, the chromatic 12-tone scale that we just used is an example of an abelian (commutative) group with 0 as the unit element, which means that it’s quite pleasant to work with. The basic operations that we want to perform are:

  • raise/2 – get the next note in the scale.
  • lower/2 – get the preceding note in the scale.
  • add/3 – add two notes in the scale.
  • length/1 – get the number of notes in the scale.
  • nth/2 – get the n:th note in the scale, starting from 0.
  • frequency/2 – get the associated frequency of the note.

Which is easily expressible in terms of a protocol:

:- protocol(scalep).

   :- public(raise/2).
   :- public(lower/2).
   :- public(add/3).
   :- public(nth/2).
   :- public(length/1).
   :- public(frequency/2).

:- end_protocol.

And to implement the chromatic scale is straightforward:

:- object(chromatic_scale,
        implements(scalep)).

    %A, A#, ..., G, G#.
    length(12).

    raise(N, N1) :-
        N1 is (N + 1) mod 12.

    lower(N, N1) :-
        N1 is (N - 1) mod 12.

    add(N1, N2, N3) :-
        N3 is (N1 + N2) mod 12.

    nth(I, I) :-
        % Used so that we can call nth/2 with uninstantiated
        % arguments. The notes are numbered 0 to 11.
        between(0, 11, I).

    %A4 to G#5.
    frequency(N, F) :-
        F is 440 * 2 ** (N/12).

:- end_object.
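
A couple of queries illustrate the wrap-around behaviour that makes the scale a group (modulo how your system formats the answers):

?- chromatic_scale::raise(11, N).
N = 0.

?- chromatic_scale::add(7, 7, N).
N = 2.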

Extending this scale to use more than 12 elements would of course not be hard either. Just to show something different we’re also going to implement the C major scale. It contains the frequencies:

0 523.3
1 587.3
2 659.3
3 698.5
4 783.9
5 880
6 987.8

The C major scale

It’s slightly harder to implement than the chromatic scale since the distances between adjacent notes are not constant. The distance between any two adjacent notes is either a half step (the distance between two adjacent notes in the chromatic scale) or two half steps. If we then represent each note by its distance from the first note we get:

0 0
1 2
2 4
3 5
4 7
5 9
6 11

Don’t worry if these specific distances don’t make any sense to you. They are not completely arbitrary though; each note in C major corresponds to a white key on the piano, and C major is actually the only major scale that makes use of only the white keys. Since we are now counting half steps we can more or less use the same formula as in the chromatic scale for calculating frequencies.

:- object(c_major,
        implements(scalep)).

    nth(0, 0).
    nth(1, 2).
    nth(2, 4).
    nth(3, 5).
    nth(4, 7).
    nth(5, 9).
    nth(6, 11).

    raise(N, N1) :-
        nth(I1, N),
        I2 is ((I1 + 1) mod 7),
        nth(I2, N1).

    lower(N, N1) :-
        nth(I1, N),
        I2 is ((I1 - 1) mod 7),
        nth(I2, N1).

    % As far as I know, this is the only way to make sense of addition
    % in C major. Simply adding the distance from the tonic doesn't work
    % since that makes it possible to get notes outside the scale.
    add(N1, N2, N3) :-
        nth(I1, N1),
        nth(I2, N2),
        I3 is ((I1 + I2) mod 7),
        nth(I3, N3).

    % C, D, E, F, G, A, B.
    length(7).

    %C5 to B5.
    frequency(N, F) :-
        F is 440 * 2 ** ((N + 3)/12).

:- end_object.

The synthesizer

Whenever we’re going to generate music we’re going to use a specific scale in order to get a linear sequence of notes (since we don’t use chords). From the notes we get a series of frequencies. But to actually produce something that is nice to listen to we need something more. To play e.g. the standard concert pitch at 440 Hz we’re going to generate a wave with 440 oscillations per second. How we generate this wave determines how the note will be played. A sine wave will give a smooth sound while a sawtooth wave will give something reminiscent of a mechanical dentist drill. To create more complex sounds a technique known as additive synthesis can be used. We shall however not pursue this option at the moment.

Our synthesizer will take 3 input arguments: the frequency, the duration and the filter that shall be applied, and it returns a list of samples in its single output argument. From the duration it’s possible to calculate how many samples we’ll need to generate with the help of the sample rate. For example, if the duration is 0.5 seconds and the sample rate is 22050 the number of samples is 0.5 \times 22050 = 11025. The wave will be generated with a loop from 0 to Number\_Of\_Samples where the following operations are performed on each sample:

  • Divide the sample index (the loop counter) by the sample rate, so that we get the correct resolution. A high sample rate means that we’ll generate more points on the wave.
  • Calculate the angular frequency of the sample, i.e. \omega = 2\pi F, where F is the frequency.
  • Apply the filter. The filter should return a floating point number in [-1, 1].
  • Scale the sample in [-1, 1] with a volume factor so that we get samples in the full sample space.

This can actually be done in rather few lines of code. Without further ado I present to you:

:- object(synthesizer).

    :- public(samples/4).
    :- public(sample_rate/1).
    :- public(bits_per_sample/1).

    :- private(filter/3).
    :- private(volume/3).
    :- private(wave//3).

    bits_per_sample(16).
    sample_rate(22050).

    samples(Frequency, Duration, Filter, Samples) :-
        sample_rate(SR),
        N is floor(SR * Duration),
        phrase(wave(N, Frequency, Filter), Samples).

    %% We could have implemented this as higher order predicates
    %% instead, but the performance loss would not have been worth it
    %% since the filter might be applied to millions of samples.
    filter(sine, Sample0, Sample) :-
        Sample is sin(Sample0).
    filter(sawtooth, Sample0, Sample) :-
        Sample is Sample0 - floor(Sample0).
    filter(triangle, Sample0, Sample) :-
        Sample is -((acos(sin(Sample0)) / pi - 0.5)*2).

    volume(M, N, V) :-
        bits_per_sample(BPS),
        V0 is (2**BPS)/2 - 1,
        %% Decrease the volume over time.
        Percent is (M/N)/2,
        V is V0*(1 - Percent).

    wave(N, Freq, F) --> wave(0, N, Freq, F).
    wave(M, N, _, _) --> {M > N}, [].
    wave(M, N, Freq, F) -->
        {M =< N,
        sample_rate(SR),
        M1 is M + 1,
        volume(M, N, V),
        X is (2*pi*Freq)*M/SR,
        filter(F, X, Sample0),
        Sample is floor(Sample0*V)},
        [word(2, little, Sample)],
        wave(M1, N, Freq, F).

:- end_object.

Putting everything together

Somehow we’ve come this far without a suitable name for the project. I’ll name it Xenakis in honor of the Greek-French music theorist, composer and architect Iannis Xenakis. You can listen to one of his most famous pieces here (warning: it’s rather frightening).

Using the components just described is not hard. First we generate a list of frequencies in a scale; this list is then used as input to the synthesizer, which gives a list of samples that is written to a WAV file.

:- object(xenakis).

   :- public(init/0).

   init :-
       %% N is the number of samples.
       generate_notes(Ts, N),
       wav::prepare(output, N),
       write_samples(Ts).

   %% Generate the frequencies in the C major scale. Each note has a
   %% duration of 0.5 seconds.
   generate_notes(Ts, N) :-
       Scale = c_major,
       findall(F-0.5,
               (Scale::nth(_, Note),
                Scale::frequency(Note, F)),
               Ts),
       Scale::length(L),
       synthesizer::sample_rate(SR),
       N is L*SR/2.

   %% Write the notes to 'output'.
   write_samples([]).
   write_samples([F-D|Fs]) :-
        synthesizer::samples(F, D, sine, Samples),
        wav::write_audio(output, Samples),
        write_samples(Fs).

:- end_object.
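
Running it is then just a matter of calling init/0 (assuming the wav object used above is loaded as well):

?- xenakis::init.

This writes the seven notes of the C major scale, half a second each, to the file output.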

All the scales that are available on Soundcloud were of course generated using this method. We now have a good foundation for the next installment where we at last will look at methods for automatically generating note sequences.

Source code

The source code is available at https://gist.github.com/1007820.

Mullets, Moustaches and MicroPROLOG

There was more to the eighties than funny looking moustaches, mullets and screaming electrical guitars. I’m of course talking about the home computer revolution: the era when the end-user was more or less expected to have a solid understanding of how the hardware and software worked. Some time ago I thought it would be a great idea to deepen my knowledge of this period, and did what any self-respecting man or woman would do in my situation: grow a moustache – ehr, I mean: read a book about it. The book in question is named The Personal Computer Handbook (Swedish: Allt du behöver veta om persondatorer), and is precisely the kind of book that I would normally scoff at. If it had been written today that assessment would probably be correct, but it managed to surpass my wildest imagination. Just look at this image from the second page, and you’ll probably understand why:

That’s the future right there! And the book keeps delivering. Instead of explaining how one uses a word processor it delves deep into technical details about electronics, peripherals, computer architectures, microcode, assembly code and high-level languages such as BASIC, LOGO, COBOL, LISP and finally the Prolog dialect microPROLOG. The text goes as follows (disclaimer: this is my translation from the Swedish book. The English version probably differs!):

At Imperial College in London, a team under the direction of Robert Kowalski has worked on a simplified version of PROLOG (called microPROLOG) intended to foster children’s capability for logical thinking. This capability is not restricted to computers, but can also be applied in other areas such as mathematics, French, history and geography.

While LOGO’s Turtle Graphics is based on the computer’s ability to create images and figures, microPROLOG is focused on the computer’s ability to manipulate symbolic expressions. But Turtle Graphics has proved to be such a successful system that it has been included in some versions of microPROLOG.

Simple PROLOG programs can be built from propositions that contains names of individuals and relations between them.

One first constructs atomic propositions (so called because they are the simplest possible propositions). For example: Anders is-neighbor-with Lisa. Lotta is-friend-with Nicke. Lisa is-fighting-with Anders.

One can also construct molecular (composite) propositions, for example: Maria likes Johan if Johan likes Maria and Johan is-nice-to Maria. Or: x is-larger-than y if x is-larger-than z and z is-larger-than y. One can ask the computer questions that are based on these facts and relations. Writing microPROLOG programs is like writing simplified and logically clear English, and the children become excited by the fact that the computer understands a language so close to their own.

MicroPROLOG is, just like BASIC, an interactive language, where the pupil can add, remove and modify the program and immediately see the result of the modification. Individuals in simple programs can be replaced by variables or lists, for example: Johan has-favourite-groups (Police, Roxy Music, ABBA).

PROLOG can handle such lists by first dividing them into a head and a tail. The head is the first element in the list, and the tail is the rest. The head in Johan’s list is therefore “Police”, and the tail is everything else in the parenthesis. The same head-tail structure is also found in LISP and LOGO.

Recent studies show that education in BASIC programming teaches children how to use the computers of the seventies, while education in LOGO and microPROLOG prepares them for the computers of the eighties and the nineties.

I can’t help but be a little moved by such unbridled enthusiasm! What went wrong? When I went to middle-school they didn’t even teach us a Basic dialect! Perhaps it’s time to revitalize the grand plans of microPROLOG and create a new generation of Prolog hackers?

Finally, I feel obliged to post one final picture from the book. It’s referred to as The computer controlled home, and should speak for itself.

Man, the future is such a letdown!

(For the record, the next installment regarding computer generated music will probably be delayed for a few weeks due to an upcoming examination period.)

Prolog’s Makin’ Music – Part 1

Interlude

Gather around everyone, and I’ll tell the story of how I sold my soul to the binary devil.

It all began a dark and gloomy night. I’d had one too many to drink – coffee, that is – and found it hard to concentrate on anything else than the splashing rain. The murky light played tricks on my eyes, or so I thought. Dangling contours everywhere. The buzzing monitor didn’t help either. I stretched out my back with a loud, cracking sound and tried to suppress a yawn.

“Do you want the power to create music from thin air?”

A voice from nowhere. Surely I hadn’t had that much to drink. I held up my keyboard like a club, cursing myself for getting rid of the IBM model M keyboard in favor of an ergonomic one, and slowly turned my head in the direction of the voice. If there was an intruder, I wouldn’t go down without a fight.

“Who’s there?”, I cried.

After a long silence the voice finally answered:

“Do you want to make a deal?”

“A deal?!” I blurted out, getting rather annoyed by his impudence.

“I shall grant your computer the gift of making music. All I ask in return is that your next blog entry contains some steamy, bit-on-bit action that somehow involves the WAV format. Also, I shall need your soul for all eternity.”

Having run out of ideas, I had no choice but to accept his offer.

“Sure! Wait, no!… Who are you?”

A manic laughter followed. He vanished in a hazy puff of smoke and left. All that remained was a chilly wind and a feeling that I had somehow been cheated.

Computer generated music

Now to the point: the goal of this and the following entries will be to create computer generated music in Prolog/Logtalk. That might sound (pun not intended – I can’t help it) like a tall order, but hopefully everything will become clearer once we’ve explicated some of the concepts in music theory. The outline is as follows:

  • Step 1 – Generate audio.
  • Step 2 – Generate tones from audio.
  • Step 3 – Generate melodies from tones, with a suitable formalism such as a cellular automaton or an L-system.

Sound as oscillations

In order to generate music we first need to understand what sound is. Wikipedia says:

Sound is a mechanical wave that is an oscillation of pressure transmitted through a solid, liquid, or gas, composed of frequencies within the range of hearing and of a level sufficiently strong to be heard, or the sensation stimulated in organs of hearing by such vibrations.

Or to put it a bit more pragmatically: a sound is a series of frequencies. Of course, this is a bit too simplistic to be useful in practice. Among other things, we need to decide whether we’re interested in mono or stereo sound, how fine-grained each frequency should be and how fast they should be played.

So we have an idea of how sound should be represented. First we decide how it should be interpreted by the listener, and then we give out the actual frequencies. As one might suspect there exists a myriad of different formats for this purpose. One of the simplest is the WAV format, which we shall use in this project.

Writing to binary files

WAV is a binary format, and thus consists of a sequence of integers of varying sizes. Hence the first step is to learn how one writes to binary files in Prolog. The bad news is that there only exists one ISO primitive for this purpose: put\_byte/2, which is not sufficient on its own since it only works with single bytes. The good news is that we can get it to do what we want with some low-level bit-fiddling. Here are the operations that we’ll need in order to produce a fully functional WAV file:

  • Write 4 byte, unsigned integers in big endian format.
  • Write 4 byte, unsigned integers in little endian format.
  • Write 2 byte, unsigned integers in little endian format.
  • Write 2 byte, signed integers in little endian format.

It would be nice if we could handle this in a uniform way, so that the underlying details of how one should use put\_byte/2 can be postponed as far as possible. For this purpose we’ll introduce a data structure, word, that has the format:

word(Byte\_Count, Endian, Integer)

where Byte\_Count is either 2 or 4, Endian is either big or little, and Integer is a positive or negative integer. So to represent the number 135 in the little endian format we would use:

word(2, little, 135)

while the number 1350 in big endian format would be represented as:

word(4, big, 1350)

Simple, but it might feel kind of weird to represent such a low-level concept in this way. In most imperative languages we would of course explicitly declare the data as either char, short, int and so on, but this is the best we can do in Prolog (unless we create the necessary bindings for the host language and borrow some datatypes). Next, we’re going to define write\_word/2 that writes a word to a stream. Let’s focus on 2 byte integers for the moment. A first attempt might look like:

write_word(word(2, Endian, I), Stream) :-
    put_byte(Stream, I).

Alas, this only works for single byte integers. If we want to write 2 bytes, we need to extract the individual bytes from the integer and call put\_byte/2 two times. This can be done with shifting and the bitwise and-operation.

write_word(word(2, Endian, Bs), S) :-
    X1 is Bs >> 8,
    X2 is Bs /\ 0x00ff,
    (  Endian = big ->
       put_byte(S, X1),
       put_byte(S, X2)
    ;  put_byte(S, X2),
       put_byte(S, X1)
    ).

Note that we also check whether Endian is big, and if so output the bytes in reversed order. This works fine for positive numbers, but what about signed, negative numbers? Since put\_byte/2 only works with non-negative numbers, we need to convert the negative number into a positive number that still denotes the same value within that byte range. This is rather easy to do since we’re using two’s complement numbers: if the number is negative, add 2^{16} (for a 2 byte word) so that the sum is the two’s complement representation of the negative number. The code will make this easier to understand:

    write_word(word(2, Endian, Bs), S) :-
        Bs >= 0,
        X1 is Bs >> 8,
        X2 is Bs /\ 0x00ff,
        (  Endian = big ->
           put_byte(S, X1),
           put_byte(S, X2)
        ;  put_byte(S, X2),
           put_byte(S, X1)
        ).
    write_word(word(2, Endian, Bs), S) :-
        Bs < 0,
        %Add 2^16 so that e.g. -1 becomes 0xffff.
        Bs1 is Bs + 0x10000,
        write_word(word(2, Endian, Bs1), S).

(Thanks to Pierpaolo Bernardi who showed me this trick on the SWI-Prolog mailing list!)
Update: Richard O’Keefe also showed a simpler solution that doesn’t need the explicit positive/negative test. It’s left as an exercise to the reader!
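
I don’t know exactly which solution was intended, but one possibility is to mask each byte with /\ 0xff, which handles negative numbers as well since SWI-Prolog’s bitwise operations treat integers as two’s complement:

write_word(word(2, Endian, Bs), S) :-
    X1 is (Bs >> 8) /\ 0xff,
    X2 is Bs /\ 0xff,
    (  Endian = big ->
       put_byte(S, X1),
       put_byte(S, X2)
    ;  put_byte(S, X2),
       put_byte(S, X1)
    ).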

The code for 4 byte integers is rather similar and hence omitted.
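
For completeness, a possible 4 byte clause in the same style (again just a sketch, with the same caveats as above):

    write_word(word(4, Endian, I), S) :-
        B3 is (I >> 24) /\ 0xff,
        B2 is (I >> 16) /\ 0xff,
        B1 is (I >> 8) /\ 0xff,
        B0 is I /\ 0xff,
        (  Endian = big ->
           put_byte(S, B3), put_byte(S, B2), put_byte(S, B1), put_byte(S, B0)
        ;  put_byte(S, B0), put_byte(S, B1), put_byte(S, B2), put_byte(S, B3)
        ).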

The WAV format

Now let’s focus on WAV. All my knowledge of the format stems from a single source, which includes a useful, visual diagram. A WAV file consists of:

  • A header containing the string “RIFF”, the remaining chunk size and the string “WAVE”.
  • A format subchunk containing the string “fmt ” (format; note the trailing space), the remaining chunk size, the audio format, the number of channels, the sample rate, the byte rate, the block align and the number of bits that are used for each sample.
  • A data subchunk that contains the string “data”, the remaining size of the subchunk and finally the actual data (the samples).

Don’t worry if some of these terms are unfamiliar or confusing. It’s not necessary to understand all the details. We begin by defining the number of samples, the number of channels, the bits per sample and the sample rate as facts:

    num_samples(100000). %This value will of course differ depending on the audio data.
    num_channels(1). %Mono.
    bits_per_sample(16). %Implies that each sample is a 16 bit, signed integer.
    sample_rate(22050).

All the other values can be derived from these parameters. For simplicity we’re going to produce a list of words that are later written with the help of write_word/2. This can be done in any number of ways, but DCGs are fairly straightforward in this case. The RIFF chunk comes first. It takes the size of the data chunk as argument, since that value is needed in order to produce the size of the remaining chunk.

    riff_chunk(Data_Chunk_Size) -->
        riff_string,
        chunk_size(Data_Chunk_Size),
        wave_string.

    riff_string --> [word(4, big, 0x52494646)]. %"RIFF".
    wave_string --> [word(4, big, 0x57415645)]. %"WAVE".

    chunk_size(Data_Chunk_Size) -->
        {Size is Data_Chunk_Size + 36}, % 36 = 4 ("WAVE") + 24 (fmt subchunk) + 8 (data subchunk header).
        [word(4, little, Size)].

The end result will be a list of the form [word(4, big, 0x52494646), ...]. The format chunk follows the same basic structure:

fmt_chunk -->
    fmt_string,
    sub_chunk1_size,
    audio_format,
    number_of_channels,
    sample_rate,
    byte_rate,
    block_align,
    bits_per_sample.

fmt_string --> [word(4, big, 0x666d7420)]. %"fmt " (note the trailing space).

sub_chunk1_size --> [word(4, little, 16)]. %16, for PCM.

audio_format --> [word(2, little, 1)]. %PCM.

number_of_channels -->
    [word(2, little, N)],
    {num_channels(N)}.

.
.
. % And so on for all the remaining stuff.
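
For reference, the elided rules might look roughly as follows. This is only a sketch, but the formulas are the standard WAV ones: the byte rate is sample rate * channels * bytes per sample, and the block align is channels * bytes per sample.

sample_rate -->
    {sample_rate(SR)},
    [word(4, little, SR)].

byte_rate -->
    {sample_rate(SR),
     num_channels(Cs),
     bits_per_sample(BPS),
     BR is SR*Cs*BPS//8},
    [word(4, little, BR)].

block_align -->
    {num_channels(Cs),
     bits_per_sample(BPS),
     BA is Cs*BPS//8},
    [word(2, little, BA)].

bits_per_sample -->
    {bits_per_sample(BPS)},
    [word(2, little, BPS)].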

The remaining data chunk is even simpler:

data_chunk(Data_Chunk_Size) -->
    data_string,
    [word(4, little, Data_Chunk_Size)],
    test_data.

test_data --> ... %This should generate a list of samples.
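
A minimal definition, using the sine_wave//1 generator defined further down, could be:

test_data -->
    {num_samples(N)},
    sine_wave(N).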

And finally, we say that a WAV file consists of a riff chunk, an fmt chunk and a data chunk:

    wav_file -->
        {num_samples(N),
         bits_per_sample(BPS),
         num_channels(Cs),
         Data_Chunk_Size is N*BPS*Cs/8},
        riff_chunk(Data_Chunk_Size),
        fmt_chunk,
        data_chunk(Data_Chunk_Size).
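
With the parameters above, Data_Chunk_Size works out to 100000*16*1/8 = 200000 bytes; adding the fixed 44 bytes of RIFF, fmt and data headers, the finished file should be 200044 bytes.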

It is used in the following way:

    output(File) :-
        open(File, write, S, [type(binary)]),
        %Call the DCG, get a list of words as result.
        phrase(wav_file, Data),
        %Write the list of words.
        write_data(Data, S),
        close(S).

    write_data([], _).
    write_data([B|Bs], S) :-
        write_word(B, S),
        write_data(Bs, S).
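
Producing an actual file is then just a matter of calling output/1 with a file name (the name below is of course arbitrary):

    ?- output('sine.wav').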

As test data, we’re going to generate a 440 Hz sine wave.

    sine_wave(0) --> [].
    sine_wave(N) -->
        {N > 0,
        sample_rate(SR),
        N1 is N - 1,
        %% Standard concert pitch, 440 Hz.
        Freq is 440,
        ScaleFactor is 2*pi*Freq/SR,
        %% Needed since sin(X) returns a float in [-1, 1]; without
        %% scaling, the truncated samples would be barely (if at all)
        %% perceptible to the human ear. The constant 32767 is used
        %% since we're dealing with 16 bit, signed integers, i.e. the
        %% range of the samples is [-32768, 32767].
        VolumeFactor is 32767,
        X is ScaleFactor*N,
        Sample0 is sin(X),
        %% Floor the sample. Otherwise we would end up with a floating
        %% point number, which is not allowed.
        Sample is floor(Sample0*VolumeFactor)},
        [word(2, little, Sample)],
        sine_wave(N1).

It’s not necessary to understand all the details, but the end result is a list of 2 byte words that represent a 440 Hz sine wave. You can listen to it here.
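
At 22050 samples per second, the 100000 samples defined earlier amount to roughly four and a half seconds of audio.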

Summary

We’re now able to write samples to WAV files. These samples can represent any tone or sound, so in theory we already have everything that’s needed to generate music. But representing a tune as millions and millions of samples is not very user-friendly and would make it more or less impossible to automatically generate anything interesting. For that we’re going to need further abstractions, and among other things define a sound bank that contains some common tones.

Source code

The source code is available at https://gist.github.com/955626.

Prologomenon Goes Viral

Apparently, social media like Twitter are all the rage these days. They’ve been used to convey critical information in emergency situations, helped fuel revolutions against oppressive governments, and now it’s finally time to overthrow the corrupt imperative languages that have grown fat on the labors of the working class…. Kidding – some of my best friends are imperative languages! I have however started a Twitter account in connection with the blog, with the intent to a) troll imperative languages, b) post code snippets within the character limit that accomplish something cool. I’ve currently implemented:

All just for fun, of course. Don’t try this at home unless you’re sure you’re able to use cryptic variable/predicate names and abuse all the language features that normally allow one to write readable, concise source code.

An Adventure Game – Part 5

We’re rapidly approaching the end. This time we’ll implement a meta-language that makes it easier to create new games with the existing engine. Conceptually speaking we’re not bringing anything new to the table, but the example game from the previous post was created in an ad-hoc manner that demanded knowledge of both Prolog/Logtalk and the engine in question. What we want is a declarative language in which it’s possible to define rooms, entities and how they should be connected. Exactly how this language is interpreted should not be the game designer’s concern. This is a scenario in which it’s best to start with the language itself, since it’s pretty much impossible to write an interpreter otherwise.

The language

We want the language to be capable of:

  • Creating entities.
  • Adding properties to existing entities.
  • Setting new values for entities. For instance, if we first create a door with the default lock we’ll probably want to change this later on.
  • Adding entities to rooms.
  • Connecting two rooms with an entrance.

All these steps should be expressible within a single file. The first part might look something like this (as always, I’m just making stuff up on the fly; it’s quite possible that there are simpler/better ways to accomplish this!):

begin entities.

build(room, room1, "A rather unremarkable room.\n").
build(room, room2, "...").

build(door, door, "...").

build(lock, lock).

build(key, key, "A slightly bent key.\n").

build(player, player).

end entities.

Where build(Type, Id) should be read as: build an entity of type Type and name it Id. build/3 additionally allows us to create printable entities, where the string is the description. This suggests that we’ll need a builder object that’s capable of constructing some default entities. Since entities consist of properties, it would be possible to build only generic entities and manually add all the interesting properties, but we make the job easier for the game designer if he/she can assume that some primitive objects are already available. Of course, it wouldn’t be practical to demand that all entities are created in this manner. If two entities are identical except for a few differing properties then it’s simpler to create a generic entity and add the properties manually instead of defining a new build-predicate. For example, say that our game will consist of two different kinds of fruits: magical and non-magical. If we eat the former we finish the game; if we eat the latter we increase the health of the player. This is naturally implemented by creating two different properties: fruit_property and magic_fruit_property. Hence, to create two fruits – one magical and one non-magical – we first create two generic instances and then add the defining properties.

begin entities.

build(generic_entity, apple, "An apple! \n").
build(generic_entity, banana, "A yellow banana. Hot diggity dog!\n").

end entities.

begin properties.

add_property(fruit_property, apple).
add_property(carriable_property, apple).

add_property(magic_fruit_property, banana).
add_property(carriable_property, banana).

end properties.

The Argus-eyed reader will probably realize that it would be even better to factor out the common property (carriable_property) of the two fruits into a prototypical base object, and then clone this object and add the unique properties (magic_fruit_property and fruit_property) afterwards.

Since the entities that have been created thus far only have the default values, we now turn to the problem of sending messages, so that we’re able to change these values at will. Say that we want to tell the lock that it should use the key that we just created, and the door that it should use the lock. A first attempt might look like this:

begin relations.

action(set_key, lock, key).
action(set_lock, door, lock).

end relations.

All identifiers here refer to the entities that have already been created. This won’t work however, due to a subtle semantic difference between how locks and doors work. A door has a lock; it consists of a lock. Therefore it’s correct to send the lock entity as an argument to set_lock. A lock, on the other hand, doesn’t consist of a key. It only needs to know which key will unlock it, so it’s not correct to send the whole key entity as argument; we only need one part of the key entity, namely its identity. To be able to differentiate between these cases we’ll introduce the notation that a term preceded by a dollar sign ($) is sent as-is, instead of being replaced by the entity corresponding to that identity. The previous attempt should hence be rewritten as:

begin relations.

action(set_key, lock, $ key).
action(set_lock, door, lock).

end relations.

The file will be interpreted from top to bottom, so if we added a property in the preceding block we’re able to change it here. The next step is to connect rooms. Strictly speaking this relation is not necessary since we’re already able to send messages to entities, but including it as a primitive in the language will make it much easier to use. The syntax is:

begin relations.
.
.
.
connect(room1, room2, door).

end relations.

The full example game in all its glory would be written as:

begin entities.

build(room, room1, "A rather unremarkable room.\n").
build(room, room2, "A room almost identical to the previous one. What on earth is going on!?\n").
build(door, door, "A wooden door with a small and rusty lock.\n").
build(lock, lock).
build(key, key, "A slightly bent key.\n").
build(generic_entity, apple, "An apple! \n").
build(generic_entity, banana, "A yellow banana. Hot diggity dog!\n").
build(player, player).

end entities.

begin properties.

add_property(fruit_property, apple).
add_property(carriable_property, apple).

add_property(magic_fruit_property, banana).
add_property(carriable_property, banana).

end properties.

begin relations.

action(set_key, lock, $ key).
action(set_lock, door, lock).

action(set_state, door, $ closed).

action(add_item, room1, apple).
action(add_item, room1, key).

action(add_item, room2, banana).

action(set_location, player, $ room1).

connect(room1, room2, door).

end relations.

A substantial improvement in readability compared to the previous efforts!

Parsing

Another boring, dry entry on parsing? Fear not, because I have a trick up my sleeve: there was a reason why the syntax of the meta-language was the spitting image of Prolog all along! One way to interpret the file is to say that begin/end and the dollar sign are all prefix operators with a single argument. Then the file is nothing but a collection of facts that can be accessed as normal and we won’t have to worry about parsing at all. A slightly more contrived but more general approach is to use what in Prolog nomenclature is known as term expansion. This is usually the preferred way to handle embedded languages and is somewhat similar to macros in Lisp. The basic idea is simple: instead of taking a term at face value we expand it according to a user-defined rule. What’s the point? Basically that we don’t have to type as much. For example, let’s say that we have a database consisting of rule/3 facts, where the first argument is an atom, the second a list and the third an integer denoting the length of the list.

rule(a, [], 0).
rule(b, [a], 1).

Furthermore, assume that we don’t want to calculate the length of the second argument at run-time. There’s nothing inherently wrong with this approach, but manually entering the length of the list is a drag and quite error-prone. It would be better if we could simply write:

rule(a, []).
rule(b, [a]).

And tell the Prolog system that these facts should be construed as rule/3 facts with an additional third argument which contains the length of the list. Fortunately this can easily be realized with the built-in predicate term_expansion/2. The first argument of term_expansion/2 is the term that shall be expanded; the second argument is the expanded term. A suitable definition for the previous example is:

term_expansion(rule(Head, Body), rule(Head, Body, Length)) :-
    length(Body, Length).
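
Once a file containing the expansion rule followed by the shorter rule/2 facts has been loaded, the database behaves as if the rule/3 versions had been entered by hand (assuming the expansion rule is read before the facts). An illustrative query:

?- rule(b, Body, Length).
Body = [a],
Length = 1.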

Great success! We don’t have to concern ourselves with how or when this predicate is called, just that it will eventually be called when the file is loaded. Like all powerful language constructs, the term expansion mechanism is easy to abuse and can quickly lead to unreadable code. We could for instance expand A :- B to B :- A if we were in a facetious mood (don’t do it, OK?). Fortunately the Logtalk support is a bit saner than in most Prolog systems. Instead of defining term_expansion/2 rules in the same file as the rules that shall be expanded, they’re encapsulated in a special expansion object. This object is later used as a hook in logtalk_load/2 to make sure that the effect is localized to a given file. In summary, there are two steps involved:

  • Define the term_expansion/2 rules in an expansion object (which must implement the expanding protocol).
  • Load the file (the script file in our case) with the expansion object.

I shan’t spell out the full details of the expansion object, but what it does is remove the begin/end directives and create a set of entity/1 facts containing the initial entities. Also, to make things easier in the script interpreter, it replaces the add_property/2, action/3 and connect/3 directives with unary facts.
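
To give a flavour of it, a stripped-down version might look like the sketch below. The wrapper terms p/2, t/3 and c/3 and the builder arities are assumptions on my part (t/3 at least matches the interpreter further down), and begin, end and $ are assumed to have been declared as prefix operators so that the script file can be read at all.

:- object(script_expansion,
    implements(expanding)).

    % The begin/end markers carry no information once the file has been read.
    term_expansion(begin(_), []).
    term_expansion(end(_), []).

    % Build each entity right away and store it as an entity/1 fact.
    term_expansion(build(Type, Id, Description), entity(E)) :-
        builder::build(Type, Id, Description, E).
    term_expansion(build(Type, Id), entity(E)) :-
        builder::build(Type, Id, E).

    % Collapse the remaining directives into unary facts.
    term_expansion(add_property(P, Id), add_property(p(P, Id))).
    term_expansion(action(M, Id1, Id2), action(t(M, Id1, Id2))).
    term_expansion(connect(R1, R2, D), connect(c(R1, R2, D))).

:- end_object.

The script is then loaded with something along the lines of logtalk_load(game_script, [hook(script_expansion)]).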

The builder

The builder is in charge of building game entities. As mentioned earlier it’s not strictly needed since it’s always possible to add properties manually, but it does simplify things. Here’s how the game world and a room could be created:

    build(world, Id, Rooms, Player, World) :-
        Ps = [map_property-State],
        build(final_state, F),
        map_property::new([[F|Rooms], Player], State),
        entity::new(Ps, Id, World).

    build(room, Id, Description, Room) :-
        Ps = [container_property - State1,
              printable_property - State2],
        container_property::new([], State1),
        printable_property::new([Description], State2),
        entity::new(Ps, Id, Room).

Interpreting

We have a description of the game and want to transform it into an entity that has the map_property. Strictly speaking this is not an interpreter, but rather a compiler from the script language to the entity language. The most important predicate is interpret_game/2, which takes an expanded script file as argument and produces a game world. It works by extracting the entity directives, the property directives and the relation directives, and then interpreting each of these separately. Finally it extracts the rooms and the player and asks the builder to build a game world.

   interpret_game(DB, World) :-
       findall(E, DB::entity(E), Es0),
       findall(P, DB::add_property(P), Ps),
       findall(A, DB::action(A), As),
       findall(C, DB::connect(C), Cs),
       interpret_properties(Ps, Es0, Es1),
       interpret_actions(As, Es1, Es2),
       interpret_connectors(Cs, Es2, Es),
       get_rooms(Es, Rooms),
       get_player(Es, Player),
       builder::build(world, world1, Rooms, Player, World).

The interpret_x predicates are all rather similar. They iterate through the list of commands and change the set of entities accordingly. For brevity, let’s concentrate on interpret_actions/3.

    interpret_actions([], Es, Es).
    interpret_actions([t(M, Id1, Id2)|As], Es0, Es) :-
        select_entity(Id1, Es0, E0, Es1),
        lookup_argument(Id2, Es0, Arg),
        entity::action(M, [Arg], E0, E),
        interpret_actions(As, [E|Es1], Es).

    lookup_argument(Id, Es, Arg) :-
        (   Id = $(Symbol) ->
            Arg = Symbol
        ;   lookup_entity(Id, Es, Arg)
        ).

The body of interpret_actions/3 should be read as: execute action M on the entity corresponding to Id1 with the argument Id2 (remember that arguments preceded by a dollar sign are left unchanged). Since we might need to update an entity several times, it’s re-added to the list of entities whenever it’s updated.

Putting everything together

We need to make a small change to init/0 in the game object. Instead of building a world manually, it now takes a script file as argument (becoming init/1) and asks the interpreter to interpret (compile!) it.

    init(Game) :-
         write('Welcome to Bacchus-Bosch!'), nl,
         current_input(S),
         script_interpreter::interpret_game(Game, World),
         repl(S, [], World).

That’s pretty much it – the end of Bacchus-Bosch. As I suspected when I started the project, the final game isn’t very fun. Wait, scratch that: it might be the worst game I’ve ever played, and that includes the infamous yellow car game. But it does have a magic banana, and that ought to count for something. In any case it wouldn’t be hard to create more engrossing games since all the building blocks are in place. It should also be noted that the engine is hardly limited to text-based or turn-based games. In a real-time game we could for instance run the update method a fixed number of times per second instead of waiting for player input. We could also add e.g. role-playing elements by defining new properties.

I hope that this pentalogy has been at least somewhat comprehensible and coherent. What the future holds for the blog I dare not promise. Hopefully we’ll someday see the return of the magic banana!

Source code

The source code is available at https://gist.github.com/924052.

An Adventure Game – Part 4

And yet again I display exceptional prowess in the (un)holy art of procrastination. This time I blame lack of coffee, Sweden’s gloomy weather and the Illuminati. This post will concern two of the three remaining obstacles. First we’re going to add enough properties so that we can create a map consisting of a few rooms together with some entities. Second, we’re going to translate the parsed commands from the user into a suitable set of action commands and execute them with respect to the game world and its entities. As always it’s best to start with a use-case scenario and extrapolate some requirements.

> look

You see a small, rusty key and a door.

> pick up the key and open the door.

> go through the door.

This room is almost identical to the previous one. How eerie. You see a banana and a door.

> eat the door.

no.

> eat the banana

no.

> take the banana and eat it.

Congratulations! You solved the mystery of the missing banana!

Let’s don our declarative thinking cap. This particular game consists of two rooms and a player. The player starts in the first room, picks up the key, unlocks the door and enters the second room where he/she in a moment of absolute intellectual clarity manages to deduce that the only way to beat the game is to eat the banana. Just like in real life. We can summarize the requirements as:

  • The player must be able to move from one location to another.
  • The player must be able to reference entities by their names.
  • The door must be linked to the second room.
  • The key must be able to lock/unlock the door.
  • The banana must have the property that the game stops when the player eats it.

Fortunately we can already handle some of these. The most fundamental requirement is that of referencing entities. Since each entity is just a list of properties we’re currently unable to distinguish them in any sensible way. The most obvious solution is to add another property, identity_property, which describes the property of having an identity. For example, the door in the scenario would be represented by the list:

[openable_property-State_1, printable_property-State_2, ..., identity_property-State_n]

For simplicity the state of identity_property is just going to be an atom, e.g. door. In a real implementation it would of course be preferable to use a more complex data structure so that entities can be referenced by attributes as well, e.g. “wooden door”, but the basic idea is the same. Of course, just because this is a nice, declarative solution doesn’t mean that it’s good. To quote Richard O’Keefe from The Craft of Prolog:

The nasty thing about declarative programming is that some clear specifications make incredibly bad programs.

It’s not too hard to see that storing the identity of an entity as a property is hopelessly inefficient. To find an entity in a container we have to iterate through the whole container and check whether each element satisfies the identity property in question. Ouch. It would be a much better idea to demand that all entities have an identity and then store the container as a tree where the nodes are entities ordered by their identities. Of course, just because that solution is better doesn’t mean that I’m going to use it! Sometimes it’s enough to be aware that something is a potential bottleneck and fix it if/when it becomes a real problem. Or buy a faster computer.

Next up is the problem of the game world. We shall introduce a new property by the name map_property and stipulate that it consists of a list of rooms and a player. Why not add the player as an item in the current room instead? Just for simplicity; it’s slightly easier to move the player from room to room if we don’t have to explicitly remove him/her from the first room and add him/her to the new one. Since we have removed the player from the rooms we’re going to need another property, that of having a position/being movable, so that it’s always possible to find the current room.

:- object(movable_property,
    extends(property)).

    new(identity_property-_).

    action(move, [New], Owner, _, Owner, New).

    action(get_location, [Room], Owner, Room, Owner, Room).
:- end_object.

The state of movable_property is simply an identity property. When someone issues the move command the current location is changed. Exactly how this works should become clearer later on. For now, let’s concentrate on implementing map_property. Its state will be a tuple of a list of rooms and the player, and it’ll have commands to add/remove rooms, get the current room, update the rooms and so on.

:- object(map_property,
    extends(property)).

    new([]-[]).

    update(E0, E) :-
        entity::update_property(map_property, E0, Rooms0-P0, Rooms-P, E),
        entity::update(P0, P),
        update_rooms(Rooms0, Rooms).

    update_rooms([], []).
    update_rooms([R0|R0s], [R|Rs]) :-
        entity::update(R0, R),
        update_rooms(R0s, Rs).

    action(add_rooms, [Rooms1], Owner, Rooms2-P, Owner, Rooms-P) :-
        ...

    action(get_room, [Property, R], Owner, Rooms-P, Owner, Rooms-P) :-
        ...

    action(select_room, [Property, R], Owner, Rooms-P, Owner, Rooms1-P) :-
        ...

    action(print, [], Owner, Rooms-P, Owner, Rooms-P) :-
        action(current_room, [Room], Owner, Rooms-P, _, _),
        entity::action(print, [], Room, _).

    action(current_room, [Current], Owner, Rooms-P, Owner, Rooms-P) :-
        entity::action(get_location, [Id], P, _),
        list::member(Current, Rooms),
        entity::get_property(Current, Id).

    action(get_player, [P], Owner, Rooms-P, Owner, Rooms-P).

:- end_object.

It’s not necessary to study the details of this particular implementation, but some of the predicates demand an explanation. update/2 uses update_property/5 in entity to update map_property with the state obtained by updating the list of rooms and the player. To put it more simply: it just calls update/2 on the rooms and the player. The print command extracts the current room and prints it (it would not be very interesting to print anything else). The current_room command gets the current location of the player and then uses member/2 to find the room with that particular identity.

Then there are doors. To have a text-game without lots of doors would simply be madness. Given the representation of the game world, each door must contain the identity of the area it leads to. It stores just the identity, not the area itself (since all areas are stored in the map_property). We shall store this identity in entrance_property:

:- object(entrance_property,
    extends(property)).

    new(identity_property-_).

    action(get_location, [Location], Owner, Location, Owner, Location).
:- end_object.

So when we create a door we plug in an entrance_property holding the identity of the room it leads to.
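
To make that concrete, here is a sketch of how a test door leading to room2 might be assembled by hand. The state representations are my guesses based on the surrounding code, and the openable and lockable properties from the earlier parts are omitted.

    build_test_door(Door) :-
        Door = [entrance_property-(identity_property-room2),
                identity_property-door,
                printable_property-"A wooden door with a small and rusty lock.\n"].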

Translating commands to entity actions

Before we begin the translation process we’re going to define some useful action commands in container_property and map_property. The most frequently used action will be that of extracting an entity according to some property (e.g. the identity), performing some state-changing operation on it and then adding it back. This procedure is necessary since we don’t have explicit state: whenever we extract an entity we get a copy of it, and changes to this copy won’t affect the original. For this purpose we’re going to define two additional action commands in container_property and map_property that take three arguments:

  • P – the property of the entity that shall be updated.
  • Old – will be unified with the old entity.
  • New – the new entity.

:- object(container_property,
    extends(property)).

     .
     .
     . % As before.

    action(update_item, [P, Old, New], Owner, Items, Owner, [New|Items1]) :-
        list::select(Old, Items, Items1),
        entity::get_property(Old, P).

    .
    .
    . % As before.
:- end_object.

The neat thing about this definition is that we can extract an entity with Old and simply pass a variable as New, and then unify this variable with the updated entity later on. The definition in map_property is similar, but works on the player and the current room.

:- object(map_property,
    extends(property)).

    .
    .
    . % As before.

    action(update_current_room, [Current0, Current],
           Owner, Rooms-P, Owner, [Current|Rooms1]-P) :-
        entity::action(get_location, [Id], P, _),
        list::select(Current0, Rooms, Rooms1),
        entity::get_property(Current0, Id).

    action(update_player, [P0, P], Owner, Rooms-P0, Owner, Rooms-P).

    .
    .
    . % As before.

:- end_object.

Now we can finally begin with the translation process. Since we know that the input from the user will be a list of commands (remember that conjunctions are allowed) we will execute them one by one and thread the state of the game world.

eval_commands([], World, World).
eval_commands([C|Cs], World0, World) :-
    write('The command is: '), write(C), nl,
    eval_command(C, World0, World1),
    eval_commands(Cs, World1, World).

Each eval_command/3 rule changes the state of World0 to World1 according to the command in question. The simplest one is the look-command with no arguments, which just prints the current room:

    eval_command(look-[], World, World) :-
        entity::action(current_room, [Room], World, _),
        write('You see: '), nl,
        entity::action(print_children, [], Room, _).

It asks the game world for its current room and then issues the print_children command. The take-command is slightly more convoluted. It takes an identity as argument, tries to find the entity in question and asks the player to pick it up.

eval_command(take-[Id], World0, World) :-
    entity::action(update_current_room, [R0, R], World0, World1),
    entity::action(update_player, [P0, P], World1, World),
    entity::action(select_item, [identity_property-Id, Item], R0, R),
    entity::action(add_item, [Item], P0, P).

The move-command is perhaps the most complex of the bunch, but follows the same basic structure:

    eval_command(move-[Id], World0, World) :-
        entity::action(current_room, [Room], World0, _),
        entity::action(update_player, [P0, P], World0, World),
        entity::action(get_item, [identity_property-Id, Entrance],
                       Room, _),
        entity::action(open, [], Entrance, _),
        entity::action(get_location, [Location], Entrance, _),
        entity::action(move, [Location], P0, P).

It tries to find and open the entrance in the current room, asks it where it leads and finally asks the player to move to that location. The lock/unlock and open/close commands are implemented in the same way. One problem remains though: it’s possible to take the key, unlock the door, open it and go through it, but there’s no way to actually finish the game. Just like everything else this functionality can be implemented in a number of ways. It might be tempting to somehow augment the top-loop and in every iteration check whether or not the final state has been reached, but this is needlessly complicated. Instead we’re going to introduce a special entity with only two properties: that of having the identity final_state and that of being printable. It’s constructed as:

    build_win_screen(Screen) :-
        Screen = [printable_property - State, identity_property-final_state],
        State = "Congratulations! A winner is you!\n (No, you can't quit. Stop trying.)\n".

Then we need an object that, whenever it is used, asks the player to move to the final state: a banana, for instance.

:- object(fruit_property,
    extends(property)).

    action(dissolve, [E0, E], Owner, State, Owner, State) :-
        entity::action(move, [identity_property-final_state], E0, E).

:- end_object.

The banana entity is created by combining an identity, a printable, a carriable and a fruit property:

    build_test_banana(Banana) :-
        Banana = [fruit_property - State1, printable_property-State2,
                  identity_property-banana, carriable_property-State3],
        fruit_property::new(State1),
        banana_description(State2),
        carriable_property::new(State3).

Then we of course need an eat-command, but this is straightforward to implement. So what happens when the player eats the banana is that the current location changes to final_state. This room doesn’t have any entities and doesn’t support any operations besides being printed, which means that the player can’t return to the rest of the game world and has completed the game.
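
For concreteness, here is a sketch of what such an eat-command could look like, reusing the update_player, get_item and dissolve actions from above (the exact plumbing is my guess, not necessarily the code of the actual game):

eval_command(eat-[Id], World0, World) :-
    entity::action(update_player, [P0, P], World0, World),
    entity::action(get_item, [identity_property-Id, Fruit], P0, _),
    %% dissolve asks the player (P0) to move to the final state,
    %% which yields the updated player P.
    entity::action(dissolve, [P0, P], Fruit, _).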

Putting everything together

We shall use the top-loop from part 2, but with some modifications. The input will be parsed into commands that are executed with respect to the current game world. The loop then calls itself recursively with the new state and asks for new input.

    init :-
         write('Welcome to Bacchus-Bosch!'), nl,
         current_input(S),
         build_test_world(World),
         repl(S, [], World).

    repl(S, History, World0) :-
        entity::update(World0, World1),
        entity::action(print, [], World1, _),
        write('> '),
        nlp::parse_line(S, Atoms),
        write('The input is: '),
        meta::map([X] >> (write(X), write(' ')), Atoms), nl,
        nlp::tag_atoms(Atoms, AtomTags),
        write('The tagged input is: '),
        meta::map([X] >> (write(X), write(' ')), AtomTags), nl,
        (   eval(History, AtomTags, World1, World) ->
            true
        ;   write('no.'), % This is Prolog after all.
            nl,
            World = World1
        ),
        write('-------------------'), nl,
        repl(S, AtomTags, World).

    eval(History, AtomTags, World, World1) :-
        nlp::resolve_pronouns(History, AtomTags, AtomTags1),
        nlp::parse_atoms(AtomTags1, _, Commands),
        eval_commands(Commands, World, World1).

Like a professional TV-chef I prepared a small test world and got the following result (with some of the debug output omitted):

Welcome to Bacchus-Bosch!
A rather unremarkable room.
> look

You see:
A slightly bent key.
A wooden door with a small and rusty lock.
-------------------
A rather unremarkable room.
> open the door

no.
-------------------
A rather unremarkable room.
> take the key and unlock the door with it

-------------------
A rather unremarkable room.
> go through the door
-------------------
A room almost identical to the previous one. What on earth is going on!?
> look

You see:
A yellow banana. Hot diggity dog!
A wooden door with a small and rusty lock.
-------------------
A room almost identical to the previous one. What on earth is going on!?
> take the banana
-------------------
A room almost identical to the previous one. What on earth is going on!?
> eat it
-------------------
Congratulations! A winner is you!
(No, you can’t quit. Stop trying.)

The final herculean task, the creation of a script-language, is saved for the next entry!

Source code

The source code is available at https://gist.github.com/900462.