Writing a BASIC Interpreter — Part 4

Alternative title: how to implement floating point arithmetic from scratch (the hard way)

Introduction

The above picture might not look terribly impressive, but a closer inspection should reveal something strange: doesn’t the precision seem to be a bit too low for a modern Prolog system (sadly, I’m not running my interpreter in MICRO-PROLOG)?

In this entry I grab the bull by the horns and completely rewrite the arithmetic predicates of the interpreter, replacing Prolog’s built-in is/2 with my homebrewed, purely logical implementation. I’d be the first to admit that this might not sound like a worthwhile endeavour, but of all the things I’ve considered in this project, this is perhaps the only contribution which in principle might be of some use. Writing a high-level interpreter for a programming language is much easier than writing a full-blown emulator, and not all language features need a low-level implementation. Hence, with this entry I want to show that even in Prolog, where we in general have little control over such matters, we can simulate low-level features of imperative programming languages without too much effort. The problem is that it’s rather unlikely that there’s any BASIC-80 software still in use where inaccurate floating point arithmetic would be important, but one never knows!

Constraints

I decided to implement these arithmetic operations without using Prolog’s built-in arithmetical support (is/2 and friends), mainly because it’s fun to build something from scratch, but also to refrain from using is/2 unless it actually simplifies the problem to be solved. The sole exception to this rule was the predicates for outputting integers and floating point numbers, where it’s more helpful to print the (floating point) number represented by the byte sequence rather than just the internal representation, and it’s easier to display these values if one is allowed to use exponentiation and multiplication.

Representation

Before we begin I have a confession to make: I initially made a quite poor choice regarding representation and then stubbornly stuck with it. ‘Poor’ is perhaps not the right word; ‘bizarre’ is probably closer to the truth. For the moment, let’s ignore floating point numbers and simply assume that we want to represent natural numbers. The easy representation, which can be found in many textbooks and which is typically taught to students, is to represent the number ‘0’ by itself and then represent any larger number by a nested term of the form s(…(s(0))). This is sometimes called a unary representation. What’s the problem? Representing a number n requires a nested term of depth n, which is exponentially worse than a binary representation, where a number n can be represented with only log(n) bits. Such an inefficiency would propagate through all arithmetical predicates of interest (addition, multiplication, and so on) and render any non-trivial program more or less useless. Moreover, while (a generalisation of) this representation could be used to represent rational numbers, it would not be meaningful to use it to represent floating point numbers in a fixed format, nor to accurately simulate e.g. overflow. Thus, our representation of numbers (and later on, floating point numbers) is going to be as sequences of bits, for which we have two obvious choices.

  • As a list of bits [b1, …, bn].
  • As a flat term bits(b1,…,bn).

But since we want to represent both integers (2 bytes) and floating point numbers in the MBF format (4 bytes), the latter representation would result in a lot of redundant and tedious code (just trust me on this!). However, there’s a compromise: we could represent bytes as flat terms of the form byte(b1,…,b8) and then (e.g.) represent an integer as int([Byte1, Byte2]). Then we would only have to write low-level predicates operating on bytes, but not on arbitrary sequences of bits, which would be very inconvenient when working with flat terms. My arguments in favour of this representation when compared to the bit list representation were as follows:

  • A list of bits is the wrong representation since some operations which should be O(1) are O(n), where n is the number of bits.
  • A hard coded byte representation makes it easier for a structure sharing system to do its job properly, and could also facilitate tabling (a form of memoization). E.g., tabling addition or multiplication for all floating point numbers is probably too much, but we could very easily table byte addition, and obtain a significant speed-up.
  • Even if predicates working directly on bytes are a bit cumbersome to define, we only have to define a small number of such predicates, and then floating point and integer operations can be handled in a reasonably clean way.

These arguments are not wrong, but not terribly impactful either. Does it really matter if a right-shift operation takes a bit more time than it theoretically should, and is it not better to start with the more general solution and then choose a more efficient representation only if the general one turns out to be too slow? Definitely, which makes it a bit embarrassing to present the code in the following sections.

Bytes

As already remarked, we’ll represent a byte by a term byte(b1, …, b8) where each b_i is a bit. For example, byte(1,0,0,1,0,0,0,1) would represent the byte 10010001. Since bytes consist of bits, let’s begin by writing a predicate which adds two bits before we worry about solving the larger problem, and once we know how to add two bytes, we’ll worry about integer and floating point addition. But how can we accomplish this without using any built-in predicates? I’m quite convinced that if this question were asked during a programming interview then not that many candidates would arrive at an acceptable solution, but it’s actually very simple: just hardwire the required information. Thus, if both given bits are 0, the third bit should be 0, and if exactly one of them is 1, the result should be 1. In other programming languages we would encode this with a handful of if-statements, and in Prolog the easiest solution is to represent each such case by a fact.

add_bits(0,0,0).
add_bits(0,1,1).
add_bits(1,0,1).

But what if the two given bits are 1? Then the result should be 0, but with 1 in carry. Hence, we’ll have to add an additional argument describing the carry value, which we’ll for simplicity add as the last argument.

add_bits(0,0,0,0).
add_bits(0,1,1,0).
add_bits(1,0,1,0).
add_bits(1,1,0,1).

This is not bad, but now imagine the situation where we want to add two bytes. Then the strategy should be to add the least significant bits, remembering the carry, and then use the carry (out) as additional input when adding the two next bits (carry in), and so on. Hence, what we really want to describe is the relationship between two given input bits, a carry in bit, an output bit, and a carry out bit. It’s easy to see that this doubles the number of cases that we have to consider, and we obtain the following program (where the carry in bit is the fourth argument and the carry out bit is the fifth argument).

add_bits(0,0,0,0,0).
add_bits(0,1,1,0,0).
add_bits(1,0,1,0,0).
add_bits(1,1,0,0,1).

add_bits(0,0,1,1,0).
add_bits(0,1,0,1,1).
add_bits(1,0,0,1,1).
add_bits(1,1,1,1,1).

Let me just quickly note that while this solution does require us to explicitly list all cases of interest, it’s conceptually no harder, and much easier to understand, than a solution making use of is/2. It also results in a more general program: we could e.g. ask a query of the form

add_bits(X1,X2,1,0,0)

which would give us the answer that X1 and X2 are either 0 and 1, or 1 and 0. Before we turn to byte addition we should make one observation: quite a few of the predicates which we’ll define are going to make frequent use of carry in/out, but manually keeping track of carry in/out, and remembering which argument is which, is going to be quite tedious and error-prone. A better solution is to implicitly thread this state using definite clause grammars (DCGs). This is the same solution that I used to thread the state of the interpreter, so if it sounds unfamiliar it might be a good idea to re-read the previous entry. Hence, carry in and carry out will be described by state//2 and we’ll understand grammar rules (written with -->) as describing a sequence of carry in/out transitions. We thus rewrite add_bits/5 as follows.

add_bits(0,0,0) --> state(0,0).
add_bits(0,1,1) --> state(0,0).
add_bits(1,0,1) --> state(0,0).
add_bits(1,1,0) --> state(0,1).
add_bits(0,0,1) --> state(1,0).
add_bits(0,1,0) --> state(1,1).
add_bits(1,0,0) --> state(1,1).
add_bits(1,1,1) --> state(1,1).
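As a reminder, state//2 (defined in the previous entry) threads the hidden state as a one-element list, which means that these rules can be run with phrase/3, supplying the carry in and reading off the carry out. A quick sanity check:

state(S0, S), [S] --> [S0].

%Adding 1 and 1 with carry in 0 gives output 0 with carry out 1:
%?- phrase(add_bits(1,1,Z), [0], [C]).
%Z = 0, C = 1.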

If you’re struggling to make sense of DCGs, or are of the opinion that DCGs should only be used for parsing, then simply view the above scheme as a way of letting rules have a carry in/out without explicitly declaring them as arguments. This is going to be quite convenient later on, so it’s certainly worth the effort.

Let’s now turn to the problem of adding two bytes. I’ve already outlined the algorithm, but I suspect that the solution that I’m going to present might be quite disturbing to some readers. How can we iterate over a byte term byte(b1, …, b8)? The problem is that such a term is not a ‘recursive’ data structure (like a tree or a list) and it’s rather off-putting to attempt any form of general iteration. While we certainly could implement a crude variant of a for loop by hardwiring the arithmetic required to proceed to the next iteration (remember that I’m trying to solve this without using is/2), the resulting code would be far from elegant and needlessly complicated. In fact, since a byte has a fixed length, it’s much easier to just add the bits manually, starting with the least significant bits in position 8 and finishing with the most significant bits in position 1. This can be accomplished as follows.

add_byte(byte(X1,X2,X3,X4,X5,X6,X7,X8), byte(Y1,Y2,Y3,Y4,Y5,Y6,Y7,Y8), byte(Z1,Z2,Z3,Z4,Z5,Z6,Z7,Z8)) -->
    add_bits(X8, Y8, Z8),
    add_bits(X7, Y7, Z7),
    add_bits(X6, Y6, Z6),
    add_bits(X5, Y5, Z5),
    add_bits(X4, Y4, Z4),
    add_bits(X3, Y3, Z3),
    add_bits(X2, Y2, Z2),
    add_bits(X1, Y1, Z1).
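As a sanity check, we can add two bytes with phrase/3, again threading the carry as a one-element list. For example, 00000001 + 11111111 wraps around to 00000000 with carry out 1:

%?- phrase(add_byte(byte(0,0,0,0,0,0,0,1),
%                   byte(1,1,1,1,1,1,1,1), Z), [0], [C]).
%Z = byte(0,0,0,0,0,0,0,0), C = 1.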

In contrast, if we had represented bytes as lists of bits, then we could easily have solved the problem via standard recursive problem solving, in the process also obtaining a much more general program applicable to arbitrary sequences of bits.

add_byte([], [], []) --> state(C,C).
add_byte([X|Xs], [Y|Ys], [Z|Zs]) -->
    add_byte(Xs, Ys, Zs),
    add_bits(X, Y, Z).

This is the problem with the byte/8 representation in a nutshell: while it’s in principle possible to write general predicates it’s in practice much easier to just hardwire everything. For an additional example, assume that we want to write a right-shift operation where we shift in according to carry in. Such a rule could very easily be defined by:

shift_right(byte(B1,B2,B3,B4,B5,B6,B7,B8), byte(C0,B1,B2,B3,B4,B5,B6,B7)) --> state(C0,B8).

Which almost feels like cheating, but if all we want to do is to shift a given byte, there’s really no reason to write a general purpose program. Before we turn to integer arithmetic I’m going to present one more simplification which is going to make it simpler to define larger operations. While defining operations on bytes (addition, subtraction, complement, and so on) is not terribly difficult, it’s a bit cumbersome to chain together several byte expressions without functional notation. There’s no general support for functional notation in Prolog, but it’s possible to create a context where a term is going to be interpreted as a function, similar to how it’s possible to use arithmetical expressions in the context of is/2. Thus, we’re going to implement a rule ev//2 which takes a byte expression (written in a readable way by using standard operators) in its first argument, and evaluates the expression (in the context of a carry out/in transition) in its second argument.

%~ is defined as a unary operator.
ev(~Exp, Res) -->
    ev(Exp, Res0),
    neg_byte(Res0, Res).
ev(Exp1 - Exp2, Res) -->
    ev(Exp2,  Byte2),
    ev(Exp1,  Byte1),
    sub_byte(Byte1, Byte2, Res).
ev(Exp1 + Exp2, Res) -->
    ev(Exp1, Byte1),
    ev(Exp2, Byte2),
    add_byte(Byte1, Byte2, Res).
ev(Exp1 >> Exp2, Res) -->
    ev(Exp1, Byte1),
    ev(Exp2, Byte2),
    %shift_right//3 shifts Byte1 right by Byte2 positions; the single-step
    %shift_right//2 above is the special case of shifting by one.
    shift_right(Byte1, Byte2, Res).

It’s of course very easy to extend ev//2 with additional operators as necessary. I also implemented the two constants b1 (short for byte 1) and b0 as:

ev(b1, byte(0,0,0,0,0,0,0,1)) --> state(C,C).
ev(b0, byte(0,0,0,0,0,0,0,0)) --> state(C,C).

But it would also be nice to define an ‘assignment’ operator which we can use in infix notation. To accomplish this we begin by defining an operator which unifies its left argument with the result of evaluating a byte expression (using ev//2). For no particular reason I chose ‘$=’, mainly because it’s not already in use.

:- op(990, xfy, $=).

X $= Y --> ev(Y,X).

Put together this makes it possible to write expressions in a much more readable way than using add_byte and sub_byte manually. For example, given a byte Byte, we could define a rule which defines the two’s complement of a byte.

two_complement(Byte, ByteC) -->
    ByteC $= ~Byte + b1.

This also makes it possible to define sub_byte//3 in a nice way: simply compute the two’s complement of the second byte and then use byte addition.
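A sketch of what this could look like (the author doesn’t show the definition, so the details, including how the carry state should be treated between the two steps, are my guess): negate via two_complement//2 and then reuse byte addition through $=.

sub_byte(Byte1, Byte2, Res) -->
    %Compute -Byte2 in two's complement, then add.
    two_complement(Byte2, Byte2C),
    Res $= Byte1 + Byte2C.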

Integers

Following the BASIC-80 reference manual we’ll represent integers as two bytes: int([High, Low]) where High is the byte containing the 8 most significant bits, and dually for Low. We can now quite easily define e.g. integer addition as:

add_int(int([X1,X2]), int([Y1,Y2]), int([Z1,Z2])) -->
Z2 $= X2 + Y2,
Z1 $= X1 + Y1.

All the hard work involving byte arithmetic has finally paid off! Naturally, we could quite easily define other integer operations (subtraction, multiplication, and so on) as necessary. It would also be straightforward to increase the number of bytes and it would even be possible to write general predicates working with an arbitrary number of bytes. Thus, while I’m still a bit ashamed of my hardwired byte predicates, at least the top-level code can be written in a reasonably clean way.
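For example, adding 1 + 255 = 256 exercises the carry between the two bytes (assuming a zero carry in):

%?- phrase(add_int(int([byte(0,0,0,0,0,0,0,0), byte(0,0,0,0,0,0,0,1)]),
%                  int([byte(0,0,0,0,0,0,0,0), byte(1,1,1,1,1,1,1,1)]), Z),
%          [0], [_]).
%Z = int([byte(0,0,0,0,0,0,0,1), byte(0,0,0,0,0,0,0,0)]).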

Floating Point Numbers

With the approach in the previous section we can represent integers in the range −32768 to 32767. Not terribly impressive. While it’s not possible to discretely encode more objects with 16 bits, we can circumvent this limit if we shift priorities. For example, imagine that we only care about numbers of the form 2^n and store such numbers by encoding the exponent n. Then even with two lowly bytes we could store numbers larger than the number of atoms in the observable universe. The problem, of course, is that such a representation would be incredibly imprecise, and a floating point representation is a nice middle ground where we, in addition to an exponent, store a significand s in a fixed number of bits, so that a number is represented by a term s · 2^n. The point of such a representation is that if we, say, add two large numbers, then we might not have enough bits to accurately add the significands of the two numbers, but we can get reasonably close by making sure that the exponents and the most significant bits in the significands are added correctly. These days floating point arithmetic is typically taken care of with dedicated hardware, but in the heyday of BASIC-80 floating point operations were quite often implemented as a software layer in the interpreter.

Let’s now have a look at how we can add support for floating point operations. Warning: this should not be viewed as a complete implementation but rather as a proof of concept, and I’m only going to superficially describe how to add two positive floating point numbers. Otherwise this entry would be even more delayed than it already is. So let’s turn to the technical details: BASIC-80 uses the Microsoft Binary Format (MBF) in either single or double precision, where a number is represented by the product of a significand and a power of two (both represented in binary). A single precision floating point number is then represented as a sequence of 4 bytes B1, B2, B3, B4 where:

  • B1 is the exponent (-127,…, -1 are represented by 1…127 and 0…127 are represented in the range 128…255).
  • The first bit in B2 is the sign bit (0 is positive and 1 is negative) and the remaining 7 bits are the start of the significand.
  • B3 and B4 are the rest of the significand (thus, 23 bits in total).

Representing this in Prolog via our byte representation is of course very straightforward: a floating point number is simply a term float([B1,B2,B3,B4]) where B1,B2,B3,B4 are the bytes in question, and we already know everything there is to know about bytes. But how do we actually obtain a number in this representation? To make a long story short, one has to:

  • Convert a decimal fraction to a binary fraction with a fixed number of bits.
  • Convert the binary fraction to scientific notation by locating the radix point.
  • Convert the binary scientific notation to the single precision floating point number.
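The first step can be sketched as follows. Here I deliberately use is/2 and represent the decimal fraction as a rational N/D, since this conversion sits at the boundary of the system; the predicate name frac_bits is mine and does not appear in the author’s code.

%Convert the fraction N/D (with 0 =< N < D) to its first K binary digits.
frac_bits(_, _, 0, []).
frac_bits(N, D, K, [B|Bs]) :-
    K > 0,
    N2 is N * 2,
    (   N2 >= D -> B = 1, N3 is N2 - D
    ;   B = 0, N3 = N2
    ),
    K1 is K - 1,
    frac_bits(N3, D, K1, Bs).

%?- frac_bits(1, 10, 8, Bs).
%Bs = [0,0,0,1,1,0,0,1].

The query illustrates the familiar fact that 0.1 has no finite binary expansion.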

All of these steps are rather straightforward, albeit tedious, and my Prolog implementation does not really stand out in any particular way. Hence, I’m simply going to assume that all numbers have been converted to the internal byte representation in a suitable way. Next, to add two floating point numbers we:

  • Compare the exponents.
  • If they are equal, then add the significands, possibly also increasing the exponent if necessary.
  • If one exponent is larger than the other then shift the significand of the smaller number, add the significands, and then shift back.

Thus, the first step is easy, assuming a predicate compare_byte/3 which takes two bytes and returns the order between them (=, <, or >). Please don’t ask how it’s defined (I didn’t hard code it by 16 distinct cases, or did I?).

add_float(float([E1, X1, Y1, Z1]), float([E2, X2, Y2, Z2]), F3) -->
    {compare_byte(E1, E2, Order)},
    add_float0(Order, float([E1, X1, Y1, Z1]), float([E2, X2, Y2, Z2]), F3).
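To answer the rhetorical question: one way to define compare_byte/3 without 16 hard-coded cases is to walk the flat byte/8 term with arg/3, comparing from the most significant bit downwards (a sketch; the author’s actual definition isn’t shown).

compare_byte(B1, B2, Order) :-
    compare_args([1,2,3,4,5,6,7,8], B1, B2, Order).

%If all bits agree the bytes are equal; otherwise the first
%differing bit decides the order.
compare_args([], _, _, =).
compare_args([I|Is], B1, B2, Order) :-
    arg(I, B1, X),
    arg(I, B2, Y),
    (   X = Y -> compare_args(Is, B1, B2, Order)
    ;   X = 0 -> Order = (<)
    ;   Order = (>)
    ).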

Let’s consider the case where the two exponents are equal. The only complicating factor is that the first bit in each of X1 and X2 is not part of the significand, but represents the sign of the number, which we for the moment assume is 0 (i.e., two positive numbers). But if we simply add the significands then the carry is going to get ‘stuck’ in the sign bit, so as a workaround we’ll set the two sign bits to 1, and then add the bytes, starting with the least significant ones.

add_float0(=, float([E1, X1, Y1, Z1]), float([E1, X2, Y2, Z2]), float([E3, X4, Y4, Z4])) -->
    set_bit(X1, b1, 1, NewX1),
    set_bit(X2, b1, 1, NewX2),
    Z3 $= Z1 + Z2,
    Y3 $= Y1 + Y2,
    X3 $= NewX1 + NewX2,
    %Also adds the carry from the previous operation. Hence, the exponent will increase.
    E3 $= E1 + b0,
    %The exponent increased so the significand has to be shifted right.
    X4 $= X3 >> b1,
    Y4 $= Y3 >> b1,
    Z4 $= Z3 >> b1.

The case when one of the exponents is larger than the other is quite similar and we just have to shift the significand of the smaller number appropriately. It’s then in principle not difficult to add support for more floating point operations, but it’s of course a non-trivial amount of work which is not suitable material for a blog entry.

The end?

Is there a moral to all of this? Not really: if we want to, then we can simulate low-level concepts of programming languages in Prolog, and although it feels a bit weird, it’s not really that much more work than in other programming languages. In the unlikely case that there’s a reader who’s interested in delving deeper into similar topics: tread carefully, and take the easy way out by representing bytes as lists instead of flat terms.

I’ve managed to cover more or less everything that I wanted in these four entries, and the missing features are either very hard to implement (e.g., support for assembly subroutines) or similar to existing concepts, and thus not terribly interesting. However, I don’t want to leave the safe confinement of the 80’s computer industry just yet, so I plan to write one additional entry, with a secret topic. Hint: it’s time for some alternate history: what if Bill Gates and Paul Allen felt the heat of the fifth generation computer project and in a flash of panic created a bizarre Frankenstein’s monster of Prolog and BASIC?

Writing a BASIC Interpreter — Part 3

Alternative title: how I was forced to write a malloc procedure in Prolog

Introduction

We now have a rudimentary but more or less functional BASIC interpreter in place which we’ll now attempt to extend so that it can handle a more meaningful language. In this entry we’re first going to implement an under-the-hood improvement which is going to make the interpreter a bit cleaner and easier to maintain. Then we’ll have a look at two major missing features: multi-dimensional arrays and functions. As a teaser for the next entry we’re also going to add a simple type system so that we can distinguish between floating point numbers and integers.

Implicit threading of state via definite clause grammars

Recall that the main idea behind the interpreter is that each predicate changes the internal state of the interpreter by passing around Comp objects. Hence, pretty much every predicate of the interpreter has the form:

p(Comp0, …, Comp) :-
    p1(Comp0, …, Comp1),
    p2(Comp1, …, Comp2),
    .
    .
    .
    pn(Compn-1, …, Comp).

Meaning that the predicate p describes the updated state Comp from Comp0 via the sequence of transitions induced by p1, …, pn. For example, we implemented the LET command as follows.

interpret_statement(Comp0, let(id(I), E), Comp) :-
    eval_exp(Comp0, E, V),
    set_mem(Comp0, id(I), V, Comp).

There’s nothing inherently wrong with this approach: we see clearly that eval_exp/3 evaluates the expression using Comp0, but is not allowed to change the state of the interpreter, and that set_mem/4 changes the state from Comp0 to Comp. But imagine that we’d like to add error handling to the interpreter, which could be implemented by augmenting the state attribute of the Comp object, currently only ‘ok’ or ‘end’, with a new ‘error’ state. If we, for example, attempt to divide a number by zero, or index an array with an invalid parameter, then we should change the state to ‘error’, and possibly supply a suitable error message. But this is not currently possible in eval_exp/3 since it’s not allowed to change the state. Clearly, we could fix this by adding an additional output argument to eval_exp/3 which represents the new state, but is it really a good idea to make such a drastic change when the vast majority of rules defining expressions don’t care about the new state?

Thankfully, this is a rather frequently occurring problem in Prolog, so there’s a reasonable solution available. The basic idea is to assume that each predicate takes a Comp object as input and returns an updated Comp object as output, and if we make this assumption for every relevant predicate, we can simply hide these arguments. We could accomplish this with a macro, using term expansion, but it’s actually easier to use Prolog’s support for definite clause grammars (DCGs) for this purpose. This is a formalism for parsing where grammar rules are written using --> instead of :-, where everything on the right-hand side describes the language in question (exactly how production rules in a context-free grammar work), and where we can use terms and arguments as we normally do in logic programming. Since this isn’t an entry on parsing I won’t go into further details, but simply state that we’re going to use DCG rules to implicitly pass around Comp objects instead of strings. With this approach we could handle LET statements as follows (assuming that eval_exp and set_mem have already been rewritten in this format).

interpret_statement(let(id(I), E)) -->
    eval_exp(E, V),
    set_mem(id(I), V).

To describe a basic operation, e.g., set_mem//2 (it’s common practice to denote a DCG rule p taking n arguments by p//n) we need to access the current state. In principle we could write these operations as ordinary predicates, explicitly writing out the arguments hidden away by the DCG notation, but there’s no strict standard among DCG implementations and it’s better not to assume a particular ordering among the arguments. To solve this I’m borrowing a trick from Markus Triska’s The Power of Prolog, which introduces a primitive state//2 that can be used to access the current state and describe an updated state. It can be defined and used as follows (see here for an explanation).

state(S0, S), [S] --> [S0].

set_mem(Var, Val) -->
    state(Comp0, Comp),
    {set(Comp0.mem, Var, Val, NewMemory), Comp = Comp0.put([mem:NewMemory])}.
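The corresponding read operation, get_mem//2 (used later when evaluating array expressions), only inspects the state. It can be sketched as follows, assuming a get/3 counterpart to the set/4 used above:

get_mem(Var, Val) -->
    %Read-only access: the state is left unchanged.
    state(Comp, Comp),
    {get(Comp.mem, Var, Val)}.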

Note that everything inside the curly brackets is treated as ordinary Prolog code. Hence, from now on, whenever we are describing operations accessing or modifying Comp objects we’ll use DCG notation, if we want to access a state we’ll use state//2, and if we want to use ordinary Prolog code we’ll wrap it inside curly brackets. Each rule of this form may then be understood as describing a sequence of transitions between Comp states, and if we don’t care about the states then we don’t even have to refer to them. For example, we may now define the case handling the addition operator when evaluating arithmetical expressions as follows.

eval_exp(E1 + E2, V) -->
    eval_exp(E1, V1),
    eval_exp(E2, V2),
    {eval_plus(V1, V2, V)}.

If we later on decide to implement error handling then it might for example be the case that eval_exp(E1, V1) changes the current state to ‘error’, which we’ll later detect in the topmost loop in interpret_line//1. For example, it could be defined along these lines.

interpret_line(ok) -->
    get_statement_and_increment_line(S),
    interpret_statement(S),
    get_status(Status),
    interpret_line(Status).
interpret_line(end) --> state(Comp, Comp).
interpret_line(error) --> … %write a suitable error message.

If you don’t like this usage of DCGs and this implicit threading of state then simply try to view it as a way of saving a few keystrokes and avoiding having to write out the Comp0, Comp1, …, Comp transitions manually, as we did before.

Multidimensional arrays

We’re now going to have a stab at implementing arrays. When reading the reference manual I was actually a bit surprised to see that BASIC-80 not only supports 2-dimensional arrays, but in fact up to 255 dimensions. Not bad! It seems that the implementation for the UC-2200 watch only supports 5 dimensions but that’s honestly more than I expected as well, and probably more than I’ve ever used in an actual program. Hence, how to add support for arrays? The first thing to observe is that we cannot obtain O(1) read/write operations unless we enforce a (small) constant limit on the allowed number of elements. For example, assume that we only allow 4 elements in each array. Then we could implement a set/4 operation as follows.

.
.
.
set(array(X1,X2,X3,X4), 3, New, array(X1,X2,New,X4)).
.
.
.

Since that’s clearly not desirable, and since we don’t want to use impure features such as destructive assignment unless there’s a very good reason for doing so, we’ll settle with O(log n) read and write operations instead, provided by e.g. library(assoc). A list would in all likelihood also be an acceptable choice but wouldn’t really be that much easier to implement. But let’s now turn to the implementation. Arrays are defined and used in the following way.

10 DIM a(10)
20 FOR i = 1 TO 10 STEP 1
30 LET a(i) = a(i-1) + i
40 NEXT i

Hence, the DIM statement at line 10 informs the interpreter that ‘a’ is an array variable with 10 elements, each of which is initially set to 0. Individual elements can be accessed and modified in the usual way, arithmetical expressions can be used as subscripts, and if a subscript is not within the given dimensions an error is reported. Unfortunately:

  • indexing starts at 0 but can be changed with the OPTION BASE statement, and
  • the DIM statement is optional and if it’s omitted the interpreter will allocate 10 elements for the array variable in question.

Both these design decisions are incredibly baffling. I might be able to excuse the first one, but the second one is completely nonsensical. Sometimes it’s possible to understand the design decisions behind BASIC simply by understanding the constraints at the time, e.g., dynamic NEXT statements are not a nice language feature, but are a natural consequence if the program is interpreted line by line. But there’s no excuse at all for the second decision. Why, exactly, is the default size 10? What’s the use case for implicitly allocating an array with 10 elements? If there’s no matching DIM statement then it seems likely that the programmer either forgot, or misspelled a variable, and this kind of default behaviour simply makes it harder to locate the bug. Moreover, this only works for single-dimensional arrays, and implementing this is more work than simply reporting an error. But we’ll worry about implementing this later. First, let’s concentrate on how a multi-dimensional array should be represented. We have a couple of options.

  • Associate each dimension with an ordered tree, so that e.g. a 2-dimensional array would be a tree of trees, where the nodes of the second layer of trees store the actual elements (similarly to how one could use a list of lists to represent a 2-dimensional array).
  • A multi-dimensional array with n dimensions is represented by an ordered tree with keys over n dimensions. Then an array variable simply points to such a tree.
  • Store each element in the tree above directly in the main memory, without introducing an additional tree.

The third option is actually quite a bit easier and requires only modest changes. Each array element is then going to be stored by its multi-dimensional index, which we’ll simply represent by a list. Hence, to refer to an element of an array all we need is the identifier of the array variable and a list of integers, which we’ll wrap inside an array/2 term. For example, if we have an array ‘a’ with two rows and three columns, then we’ll refer to its elements by array(a, [0,0]), array(a, [0,1]), …, array(a, [1,2]), all of which will be stored directly in the main memory. Then we need to support assignment to arrays within LET statements and evaluating an array expression within an expression.

interpret_statement(let(array(I,Es), E)) -->
    %We assume a predicate which evaluates a list of expressions using eval_exp//2.
    eval_exps(Es, Vs),
    eval_exp(E, V),
    set_mem(array(I,Vs), V).

Note that we define the operation via a DCG rule as described earlier. Hence, we simply evaluate the list of dimensions (e.g., maybe the array is indexed by a loop variable), evaluate the right-hand side, and write the array term to memory. Evaluating array expressions is then easy.

eval_exp(array(I, Ds), V) -->
    eval_exps(Ds, Vs),
    get_mem(array(I, Vs), V).

But what about the DIM statement? The simplest approach is to do nothing and ignore the statement.

interpret_statement(dim(_I, _Ds)) --> state(Comp, Comp).

This actually works, and only took a couple of lines of code to implement. Unfortunately, there’s a problem with the above code: it’s too general! With this implementation arrays don’t have a fixed size and can be resized arbitrarily. This is, in principle, good, but takes us a bit too far away from BASIC-80. Hence, we have no other choice but to artificially degrade our implementation. We have two options:

  • Store the identifier of each array together with its dimensions in the main memory. Each time an element is accessed we check whether the given indices are within bounds. Hence, we only store as many elements as needed.
  • Initially, create one element in the tree for each (multi-dimensional) index of the array. If an invalid index is used then the get_mem operation is simply going to fail (which in the future could be extended to report an error).

The second option makes it slightly easier to implement the additional condition that arrays initially should have their elements set to 0, so that’s the one that I’m going to choose, but naturally I feel slightly disturbed by this design choice. Writing a memory allocation procedure in Prolog? Really? Fortunately it’s not terribly complicated. We’ll introduce a rule malloc//2 which takes an identifier and a list of dimensions and describes the Comp object obtained from inserting an element for each index within the given dimensions. There are several ways to solve this but a straightforward solution is to define an auxiliary predicate which, given a list of dimensions, returns a list of integers within these bounds. For example, if the list of dimensions is [2,4] then we should first return [0,0], then [0,1], …, [1,3], and afterwards we’ll simply use findall/3 to find all answers. To ensure that our predicates are internally consistent we’ll also wrap each integer in this list inside an int/1 term. After having constructed this list we’ll simply walk through it and for each index create a new memory entry and set it to 0. Hence, we obtain the following.

malloc(I, Ds) -->
    {findall(array(I,X), integers(Ds, X), Ids)},
    malloc0(Ids).

malloc0([]) --> state(Comp, Comp).
malloc0([Id|Ids]) -->
    set_mem(Id, int(0)),
    malloc0(Ids).

integers([], []).
integers([int(D)|Ds], [int(X)|Xs]) :-
    D0 is D - 1,
    %between/3 is a built-in predicate.
    between(0,D0,X),
    integers(Ds, Xs).
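For example, with the dimension list [int(2),int(3)] the predicate enumerates the six indices in row-major order, so wrapping it in findall/3 should yield:

```prolog
?- findall(X, integers([int(2),int(3)], X), Xs).
%Xs = [[int(0),int(0)], [int(0),int(1)], [int(0),int(2)],
%      [int(1),int(0)], [int(1),int(1)], [int(1),int(2)]]
```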

We can now handle DIM statements as follows.

interpret_statement(dim(id(I), Ds)) -->
    malloc(I, Ds).

The only remaining change is then to ensure that a LET statement stays within the original memory. I solved this by introducing a new basic predicate set_existing_mem//2 which fails if the first argument is not already present (which we have ensured by allocating the memory from the original DIM statement).

interpret_statement(let(array(I,Es), E)) -->
    eval_exps(Es, Vs),
    eval_exp(E, V),
    set_existing_mem(array(I,Vs), V).
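The predicate set_existing_mem//2 itself is not shown. A possible definition, sketched here under the assumption that the memory is still a single ordered tree at this point, is to simply guard the update with a lookup:

```prolog
%Like set_mem//2, but fails if Var has not been allocated before,
%i.e., if there is no existing entry for Var in the memory tree.
set_existing_mem(Var, Val) -->
    state(Comp0, Comp),
    {get(Comp0.mem, Var, _OldVal),  %fails for unallocated indices
     set(Comp0.mem, Var, Val, NewMemory),
     Comp = Comp0.put([mem:NewMemory])}.
```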

Interestingly, we have now inadvertently introduced the possibility of memory leaks! Once an array has been allocated there’s no way to remove it. But since we’re working with a completely flat memory hierarchy this is not going to be a problem. Note that, in the end, I decided not to support the default behaviour of allocating an array with 10 elements when there’s no matching DIM statement. It’s bad enough as it is.

Adding functions

BASIC-80 keeps delivering: it has support for functions with local variables. Amazing. Might it even be possible to use functions to achieve some level of structured programming? Absolutely not, they are incredibly restricted: a function can only take a single argument and its body consists of a single-line arithmetical expression. Hence, their only usage is really to reuse slightly larger expressions which you don’t want to retype multiple times. They are defined and used as follows.

10 DEF FN SUCC(X) =  X + 1
20 LET Y = FN SUCC(FN SUCC(1))
30 PRINT Y

This defines a function SUCC with one argument X (line 10) which is used two times in line 20, and is going to bind Y to 3. Before we try to implement this a few observations should be made.

  • The variable X is actually a local variable which can’t be referenced outside the function. If X occurs elsewhere it’s going to be shadowed by the local variable.
  • Functions can refer to variables defined outside the function since all other variables have a global scope.
  • The semantics of DEF FN … at line 10 is to define the function. If this line is never interpreted then it’s not possible to call the function. Hence, it’s not possible to place function definitions at the end of the source file, unless we use GOTO statements to interpret the definitions and then jump back. This clunky behaviour simply makes it possible to interpret the program line by line, without needing to scan the entire source file for function definitions.

As usual, we’re going to assume that the program has been parsed into a suitable format. Everything that we need to do in the DEF FN statement is then to write the name of the function to the main memory together with its variable and body, which we’ll wrap inside an fn/2 term.

interpret_statement(def_fn(Id, Arg, Body)) -->
    set_mem(Id, fn(Arg, Body)).
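To make the assumed parse format concrete, the SUCC example above might be parsed into the following terms (my guess, extrapolating from the term conventions used elsewhere in this series):

```prolog
%10 DEF FN SUCC(X) = X + 1
%    =>  def_fn(id(succ), id(x), id(x) + int(1))
%20 LET Y = FN SUCC(FN SUCC(1))
%    =>  let(id(y), fn(id(succ), fn(id(succ), int(1))))
```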

The moderately interesting aspect is then to evaluate functions. Conceptually, this is not so difficult. We already know the name of the function, its local variable, and its body, so all that we have to do is to (1) evaluate the argument, and (2) evaluate the body of the function with respect to this value. However, recall that the argument of the function is supposed to be local to the function. Hence, after evaluating the function call there should be no traces left in the main memory of the local variable, and if there exists a global variable with the same name then it should not be overwritten by the local variable. I considered two solutions:

  • Wrap local variables inside a special term so that they don’t collide with global variables. I.e., the local variable X in the BASIC program above might internally be represented by a term local(succ, x).
  • Extend the memory so that it supports multiple frames. When evaluating a function we’ll then push a new frame containing the local variable.

The first option pretty much means that we solve the problem when parsing: when parsing a DEF FN statement we’ll wrap the argument of the function in a special term, and when parsing the body of the function we’ll treat each occurrence of the local variable in a special way. If we choose the second option then we’ll have to extend the basic operations, get_mem//2 and set_mem//2, so that they support multiple frames. It certainly feels like the first option is much simpler, and might thus be preferable, but feelings can be deceiving. I actually implemented both these strategies, starting with the first one, and it turned out to be more complicated and less robust than the second strategy. Moreover, the second option increases the flexibility of the interpreter and makes it much easier to support more powerful functions/procedures, if we so desire. Hence, that’s the option I’m going to present.

Recall that the memory of the Comp object, Comp.mem, is represented by an ordered tree. To support multiple frames we’ll extend this to a list of ordered trees instead. In get_mem//2 we’ll then attempt to find the variable in question in the current frame, and if that fails then we’ll try the second frame, and so on. In set_mem//2 we’ll simply assign a variable a value with respect to the current frame (although it would be easy to search the other frames if the variable is not present).

get_mem(Var, Val) --> state(Comp), {search_mem(Comp.mem, Var, Val)}.

%Auxiliary predicate.
search_mem([Frame|Frames], Var, Val) :-
    (   get(Frame, Var, Val)
    ->  true
    ;   search_mem(Frames, Var, Val)
    ).

set_mem(Var, Val) -->
    state(Comp0, Comp),
    {Comp0.mem = [First|Rest],
    set(First, Var, Val, NewMemory),
    Comp = Comp0.put([mem:[NewMemory|Rest]])}.

I’m not a huge fan of the if-then-else construct -> in Prolog since it’s easy to misuse and leads to complicated, typically nested definitions, which are impossible to understand for anyone except the original programmer. But in search_mem/3 it’s exactly what we want to express: if the variable can be found in the current frame then we’re done, otherwise we continue searching. To solve this without if-then-else we’d either have to use negation as finite failure, which is implemented so poorly in Prolog that it’s depressing to even acknowledge its existence, or write an additional predicate which given an ordered tree and a key, returns either the value in question, or a special null value. Then we could branch upon the return value and decide whether to continue searching or not. But this is hardly worth the effort.

Last, we implement push_frame//0, which pushes an empty frame, and the dual operation pop_frame//0, which removes the current frame. We can then use these to evaluate function applications in the context of arithmetical expressions.

push_frame -->
    state(Comp0, Comp),
    {empty_assoc(NewFrame),
    NewMem = [NewFrame|Comp0.mem],
    Comp = Comp0.put([mem:NewMem])}.

pop_frame -->
    state(Comp0, Comp),
    {Comp0.mem = [_First|Rest],
    Comp = Comp0.put([mem:Rest])}.

eval_exp(fn(Id, Arg), V) -->
    eval_exp(Arg, ArgVal),
    get_mem(Id, fn(Var, Body)),
    push_frame,
    set_mem(Var, ArgVal),
    eval_exp(Body, V),
    pop_frame.

Hence, although it initially felt like adding support for multiple frames would require large changes in the interpreter, it turned out that we only needed to change/add a couple of lines of code.

Adding preliminary support for floating point numbers

At this stage the interpreter supports quite a lot of features but we have (to be precise: I have, I take full responsibility!) made a large simplifying assumption: all numbers are assumed to be integers, represented via int/1 terms, and they are evaluated via Prolog by using is/2. Naturally, we want to support floating point numbers as well. However, in the 8k version of BASIC-80 it’s not possible to declare a variable as an integer; in the extended version this can be done by suffixing the variable name in question by a percentage sign, e.g.:

LET X% = 1
LET Y = 1

would declare X% to be an integer and Y to be a (single precision) floating point number. But this is not possible in the 8k version so all numeric variables are assumed to be (single precision) floating point numbers. However, I’m unsure whether loop variables used in FOR NEXT loops should then be of integer or floating point type. If we read the reference manual literally then they should be floating point, but this would incur a huge performance penalty since many machines at the time didn’t even have hardware support for floating point numbers. Thus, if I had to guess, the reference manual is inconsistent, and loop variables are internally represented by integers even in the 8k version, but please let me know if you have a definite answer.

Adding support for floating point numbers is then not terribly difficult. We represent floating point numbers by terms of the form float(N) where N is the number in question. During parsing, numbers will be identified as either integers or floating point numbers depending on the expressions in question, e.g., 1.0 in LET X = 1.0 will be treated as a floating point number and will internally be represented by float(1.0). We then modify the evaluation of arithmetical expressions accordingly. Here I’m presenting how addition is handled but the other cases can be handled in a very similar way.

eval_exp(int(Exp), int(Exp)) --> state(Comp, Comp).

eval_exp(float(Exp), float(Exp)) --> state(Comp, Comp).

eval_exp(E1 + E2, V) -->
    eval_exp(E1, V1),
    eval_exp(E2, V2),
    {eval_binary_op(+, V1, V2, V)}.

%Both arguments are integers. No conversion necessary.
eval_binary_op(Operand, int(V1), int(V2), Int) :-
    eval_binary_int_op(Operand, int(V1), int(V2), Int).

%Both arguments are floats. No conversion necessary.
eval_binary_op(Operand, float(V1), float(V2), Float) :-
    eval_binary_float_op(Operand, float(V1), float(V2), Float).

eval_binary_int_op(+, int(V1), int(V2), int(V)) :- V is V1 + V2.
eval_binary_float_op(+, float(V1), float(V2), float(V)) :- V is V1 + V2.

Then we simply add clauses to eval_binary_op/4 for the remaining cases. E.g., if one argument is an integer and the other a floating point number, then the integer is converted to a floating point number. Note also that at the moment eval_binary_int_op/4 and eval_binary_float_op/4 do not differ in any meaningful way since we use is/2 in both cases. This, however, will change drastically soon!
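For instance, the mixed cases could be filled in by promoting the integer argument and delegating to the float operation (a sketch; the original clauses are not shown in the text):

```prolog
%One argument is an integer, the other a float: promote the
%integer to a float and delegate to the float operation.
eval_binary_op(Operand, int(V1), float(V2), Float) :-
    F1 is float(V1),
    eval_binary_float_op(Operand, float(F1), float(V2), Float).

eval_binary_op(Operand, float(V1), int(V2), Float) :-
    F2 is float(V2),
    eval_binary_float_op(Operand, float(V1), float(F2), Float).
```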

Summary and outlook

We added several important features to the interpreter, and at this stage it can handle a quite large subset of BASIC-80. However, it works a bit too well for certain programs. For example, if we write a program computing the factorial of a given number then the interpreter is happily going to report that the factorial of 20 is 2432902008176640000, thanks to SWI Prolog’s nice support for big integers. This is clearly not something that an actual 8-bit computer such as the Z80 would be able to manage so we’re going to have a look at how one can accurately simulate low level arithmetic, including support for floating point numbers in the MBF format (Microsoft Binary Format), in Prolog. I’m not going to give too much away at this stage but my goal is to implement these operations without using any built-in arithmetical operations. Hence, we’re pretty much going to have to invent a fictional ALU from scratch. Will Prolog be up for this task, or will it break under the pressure? You’ll have to wait and see until the next installment!

Writing a BASIC Interpreter — Part 2

Introduction

In the previous entry we designed a prototype interpreter which represented a BASIC program as a list of statements and evaluated it by (1) interpreting the current statement, and (2) recursively interpreting the rest of the statements, in the process always passing around the updated memory configuration. However, this strategy fell short when we tried to extend it to handle BASIC’s dynamic control structures, such as the GOTO statement. We encountered two major problems:

  • It’s not sufficient to represent the program by a list of statements since we (1) need line numbers, and (2) need the possibility of returning to an earlier part of the program.
  • It’s not correct to interpret a statement and then recursively interpret the rest of the statements, since the first statement might have changed the line number, rendering the old list of statements obsolete.

The second problem also turned out to be an issue when interpreting FOR NEXT loops, where it’s also not correct to recursively evaluate the body of the for loop and then increment the loop variable, since we don’t know whether control will ever be returned to the head of the loop. For example, consider the following program:

10 LET X = 1
20 FOR I = 1 to 10 step 1
30 LET X = X + 1
40 GOTO 60
50 NEXT I
60 PRINT X
70 PRINT I

The result of interpreting this program should be to print 2, followed by 1, since the GOTO statement implies that the NEXT statement following the FOR loop will be skipped. Now assume that we insert a NEXT statement at line 80.

10 LET X = 1
20 FOR I = 1 to 10 step 1
30 LET X = X + 1
40 GOTO 60
50 NEXT I
60 PRINT X
70 PRINT I
80 NEXT I

Then the new NEXT statement will dynamically match the FOR statement, and the resulting program will print X from 2 to 10, and I from 1 to 9. Our prototype interpreter failed to handle programs like this and it seemed to be rather hard to fix with a simple patch. However, on the bright side, we can currently handle:

  • LET statements.
  • Arithmetical expressions.
  • Boolean expressions.

Moreover, the interpreter is very easy to extend with new language constructs since one in principle only has to add a new case to the definition of interpret_statement/3. Hence, while we have to make some radical changes to the interpreter, we are going to try our hardest to not convolute interpret_statement/3 more than necessary.

In BASIC’s Defence

Before we go back to the drawing board it might be a good idea to remind ourselves why BASIC has the form it has. Clearly, out-of-order NEXT statements is not really an intended language feature, but rather a side effect of how early BASIC interpreters were implemented.

  • A compiled language is out of the question due to limited memory and incredibly slow secondary storage.
  • The interpreter is likely going to be the only program shipped with the computer, so at the very least it has to support line editing.
  • Since the programmer enters the program line by line, things are greatly simplified if we assume that it’s possible to tokenise and parse each statement independently.

The third point explains the bizarre nature of FOR NEXT loops: by viewing FOR statements and NEXT statements as two separate statements, instead of a single control structure, there is no need to parse the entire program and attempt to match FOR statements with NEXT statements, or to attempt to determine the body of a FOR statement by explicitly searching for a NEXT statement, during the initialization of the loop. Instead, when a FOR statement is found, we simply push an identifier of the FOR loop to a stack (e.g., its line number and the loop variable), and when a NEXT statement is found we pop from the stack and return control to the FOR loop.

With these considerations in mind BASIC is a fairly good language. Many languages at the time, e.g. Pascal, were simply not fit for being interpreted, and certainly not in a memory efficient line-by-line fashion. Certain Lisp dialects might have been an option, but if I had a microcomputer with only a couple of kilobytes of RAM, the types of Lisp programs I could write would not really be that different from what I could achieve with BASIC. Moreover, the syntax and the language constructs of BASIC are intuitively understandable to a layman, and at the time, just having access to an advanced scientific calculator supporting multi-dimensional arrays and simple but useful string processing was revolutionary. I’d even say that picking up programming with BASIC in the 70’s and early 80’s was easier than starting with a high-level programming language today, where there are more layers and abstractions between the source file and what is visible on the display. The only really detrimental design decision is the lack of facilities for structured programming. Implementing e.g. procedures supporting a small number of local variables would not have been terribly difficult and could easily have been an optional feature for more powerful systems.

A Second Attempt

Since we now understand the idea behind BASIC a bit better, namely that programs should be interpreted line by line, it’s going to be much easier to design a reasonable interpreter. Previously we represented the program as a list of statements, where the current element represented the current point of execution. Consider the following two proposals.

  • The program will be represented by an ordered tree where each node consists of a key (a line number) and a statement.
  • The interpreter will keep track of the current point of execution by storing the current line number.

These two changes imply that during an iteration we have to fetch the statement associated with the current line number, increment the line number, interpret the statement, and recurse. For the moment, I’m going to brush aside any parsing issues and assume that our parser returns a list of tuples of the form Line:Statement where Line is a line number and Statement is a (parsed) statement. For example, consider the following program which computes the factorial of 5:

10 LET X = 1
20 FOR I = 1 TO 5 STEP 1
30 LET X = X*I
40 NEXT I

This would be represented by the list:

[int(10):let(id(x),int(1)),
int(20):for(id(i),int(1),int(5),int(1)),
int(30):let(id(x),id(x)*id(i)),
int(40):next(id(i))]

But before we consider how the interpreter should be changed to handle this new representation, let’s see how it’s possible to convert a list of this form to an ordered tree. Why not use the above list directly? If the program has n lines then accessing a statement at a particular line takes O(n) time, which is quite bad since it’s something that the interpreter is going to do in each iteration. While I’m not concerned over low level efficiency, there’s no point in using suboptimal data structures when a more reasonable choice exists. But if I’m so concerned about data structures, why don’t I use an array indexed by line numbers, so that accessing a statement at a given line takes constant time? Mainly because we would then first have to normalize the line numbers, and because simulating arrays with “flat” terms is typically not a great experience in Prolog (it would, for example, be cumbersome if we want to extend the interpreter to support line editing). Also, log(n) time is still great: if we have a BASIC program with 1000 lines then we’re going to find the statement associated with a given line in roughly 10 steps, and if we double the program size to 2000 lines then the cost only increases to 11 steps, and so on. However, if there’s a reader with a huge BASIC program which he/she for some reason can’t run in any other interpreter, and for which the tree representation turns out to be too slow, then I promise to change the representation!

There is one catch with using the ordered tree in library(assoc), however. How do we find the next line? There is no basic operation which does this, so as a workaround we’re, for each node in the tree, going to store the key of the next line number. This can easily be accomplished by storing each node in the tree as a tuple Statement:NextLine where Statement is a statement, and NextLine is the key of the line directly following Statement. But what should NextLine be when Statement is the last statement in the program? To handle this we introduce a special line number, ‘void’, which contains the statement ‘end’ and whose next line simply points to itself. With this in mind we can easily write a predicate which, given a list of numbered statements, creates the corresponding search tree, by standard recursive problem solving:

  • Base case: a list with only one statement. Then we want to create an empty tree, insert the statement in question, and point it to the special ‘void’ number discussed above.
  • Recursive case: a list with two or more statements where the first element is Num:Statement. Solve the problem recursively. Then we have a tree containing everything except the current statement. Furthermore, assume that the line number of the next statement is NextNum. Then we want to insert a new node in the tree where the key is Num and where the data is Statement:NextNum.

Hence, we obtain the following.

statement_list_to_program([N:S], Tree) :-
	empty_assoc(Tree0),
	set(Tree0, N, S:int(void), Tree1),
	set(Tree1, int(void), end:int(void), Tree).
statement_list_to_program([N1:S1, N2:S2|Rest], Tree) :-
	statement_list_to_program([N2:S2|Rest], Tree0),
	set(Tree0, N1, S1:N2, Tree).
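Applied to the factorial example above, statement_list_to_program/2 then produces a tree with the following key/value pairs, the last entry being the special ‘void’ line:

```prolog
%int(10)   -> let(id(x),int(1)) : int(20)
%int(20)   -> for(id(i),int(1),int(5),int(1)) : int(30)
%int(30)   -> let(id(x),id(x)*id(i)) : int(40)
%int(40)   -> next(id(i)) : int(void)
%int(void) -> end : int(void)
```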

Naturally, we’ll now have to update our interpreter to reflect these changes, and instead of passing around objects of the form comp(Mem), we’ll pass around objects of the form comp(Mem, Program, Line) where Program is the tree representation of the program and Line is the current line number. However, before making this change let’s have a look at what the top-most loop of the interpreter would look like. Here, we assume the existence of a predicate which fetches the statement associated with the current line number and updates the line number (of course, we have to implement this predicate later).

interpret_line(Comp0, Comp) :-
    get_statement_and_increment_line(Comp0, Statement, Comp1),
    interpret_statement(Comp1, Statement, Comp2),
    interpret_line(Comp2, Comp).

But what should the end condition be? Previously, we simply assumed that we had reached the end of the program when the current list of statements was empty. Since we don’t want to complicate the definition of interpret_line/2 more than necessary we’ll assume that the Comp object supports an operation get_status/2 such that get_status(Comp, Status) succeeds with Status = ‘ok’ or Status = ‘end’. If we in the future decide that additional states are necessary (e.g., perhaps we want to introduce a special error state) then we’ll augment get_status/2 in an appropriate way. We could then redefine interpret_line/2 as:

interpret_line(Comp0, Comp) :-
    get_status(Comp0, ok),
    get_statement_and_increment_line(Comp0, Command, Comp1),
    interpret_statement(Comp1, Command, Comp2),
    interpret_line(Comp2, Comp).

interpret_line(Comp, Comp) :-
    get_status(Comp, end).

However, I prefer the following variant which attempts to use unification as early as possible to distinguish between the ‘ok’ state and the ‘end’ state:

interpret_line(Comp0, ok, Comp) :-
    get_statement_and_increment_line(Comp0, S, Comp1),
    interpret_statement(Comp1, S, Comp2),
    get_status(Comp2, Status),
    interpret_line(Comp2, Status, Comp).

interpret_line(Comp, end, Comp).

Why? We can now immediately see that the two cases are mutually exclusive simply by inspecting the heads of the two rules, instead of having to understand how get_status/2 is used in the bodies. Hence, at this stage, the Comp objects needs to keep track of (1) the current line number, (2) the program, represented by an ordered tree storing line numbers and statements, and (3) the current status. Recall that we had the earlier definitions:

init_comp(comp(M)) :-
    empty_mem(M).

set_mem(comp(M0), Var, Val, comp(M)) :- 
    set(M0, Var, Val, M).
get_mem(comp(M), Var, Val) :- 
    get(M, Var, Val).

These now have to be augmented whenever we add additional arguments to the computer object.

%Initialises the computer with an empty memory and a supplied program and a start line.
init_comp(Program, StartLine, comp(M, Program, StartLine, ok)) :-
    empty_mem(M).

%Recall from the previous entry that set/4 was defined using
%put_assoc/4 from library(assoc).
set_mem(comp(M0, P, L, S), Var, Val, comp(M, P, L, S)) :- set(M0, Var, Val, M).
get_mem(comp(M, P, L, S), Var, Val) :- get(M, Var, Val).

There’s of course a problem with this representation of the computer object: every time we change something we have to change all basic operations, even the operations which have nothing to do with the other components of the object. This is not too bad if the comp object is expected to be fairly static, and if we have only a few number of operations to change, but this can be difficult to predict. SWI-Prolog contains a very handy data structure for this purpose: the dictionary data structure. This data structure supports functional notation which initially might look rather weird, but is quite convenient. Consider the following example:

Comp = comp{mem:Memory, program:Program, status:ok}, Y = Comp.status.

This is going to succeed and bind Y to ‘ok’. If we want to update the status to ‘end’ we could write:

Comp1 = Comp.put([status:end]).

And the expression Comp.put([status:end]) could even appear in the head of a rule, making it possible to define the aforementioned basic operations as follows.

init_comp(Program, StartLine, comp{mem:M, program:Program, line:StartLine, status:ok}) :-
    empty_mem(M).

set_mem(Comp0, Var, Val, Comp0.put([mem:NewMemory])) :-
    set(Comp0.mem, Var, Val, NewMemory).

get_mem(Comp, Var, Val) :-
    get(Comp.mem, Var, Val).

Similarly, we can easily define a predicate which, given a line number, returns the corresponding tuple Statement:NextLine, and use it to define the aforementioned predicate get_statement_and_increment_line/3.

get_statement_and_increment_line(Comp0, Command, Comp0.put([line:NewLine])) :-
    get(Comp0.program, Comp0.line, Command:NewLine).

GOTO Statements at Last!

Let’s now see how we can make use of the new basic operations of the interpreter to define the GOTO statement. In the current definition of interpret_line/3 the interpreter fetches a command and increases the line number in the beginning of the iteration. Hence, all that we have to do is to change the current line number to the number indicated by the GOTO statement.


interpret_statement(Comp0, goto(Label), Comp) :-
    set_line(Comp0, Label, Comp).

set_line(Comp0, Line, Comp0.put([line:Line])).

Great success! Note also that we didn’t have to make any changes to e.g. the old definitions of interpret_statement/3 handling LET and PRINT instructions, despite drastically changing the internal structure of the interpreter state. Equipped with this new confidence let’s now try to handle the troublesome FOR NEXT loops. Importantly, by the earlier observations we now know that FOR and NEXT statements should actually be viewed as two independent statements, rather than as a single statement.

  • When interpreting a FOR statement we should evaluate the start value, the end value, assign the loop variable the start value, evaluate the loop condition, and then continue evaluating the statement directly following the FOR loop.
  • When interpreting a NEXT statement we should increment the loop variable and then return control to the FOR loop.

But how should FOR and NEXT statements communicate with each other? The easiest solution is to use a stack: before the FOR loop statement is finished we should push a suitable identifier to the stack (e.g., the line of the FOR loop, and the loop variable) so that once a matching NEXT statement is found we simply pop from the stack and proceed accordingly. Hence, let’s add a stack to the comp object, represented by a list, and supporting the two stereotypical push and pop operations.

%We have to change init so that comp now contains a stack, initialised with the empty list.
init_comp(Program, StartLine, comp{mem:M, stack:[], program:Program, line:StartLine, status:ok}) :-
    empty_mem(M).

%We push to the stack by adding the element E as the head.
push(Comp0, E, Comp0.put([stack:[E|Comp0.stack]])).

%We pop from the stack by removing the head of the list.
pop(Comp0, E, Comp0.put([stack:Stack])) :-
    [E|Stack] = Comp0.stack.

There is only one problem: what happens if the FOR loop condition is false already during initialisation? Should we (1) go to the statement directly following the FOR statement, (2) go to the statement directly following the NEXT statement, or (3) silently fail? The BASIC-80 reference manual claims that the second case should happen, but this is clearly impossible since we don’t know where the matching NEXT statement is. For all we know, maybe the programmer, tired after a long week of hardship, messed up and accidentally hid the NEXT statement so well that finding it would require us to solve the halting problem for Turing machines. While one could certainly attempt to find a matching NEXT statement by scanning the remaining program line by line, there’s little point in doing these extra steps. Hence, we’ll go for option (1) instead, leading to the following definition of interpret_statement/3.

interpret_statement(Comp0, for(Id, Start, End, Step), Comp) :-
    eval_exp(Comp0, Start, StartVal),
    set_mem(Comp0, Id, StartVal, Comp1),
    interpret_for(1, Comp1, for(Id, Start, End, Step), void, Comp).

interpret_for(0, Comp0, for(_, _, _, _), LineAfterNext, Comp) :-
    interpret_statement(Comp0, goto(LineAfterNext), Comp).

interpret_for(1, Comp0, for(Id, Start, End, Step), _LineAfterNext, Comp) :-
    get_line(Comp0, ForLine),
    push(Comp0, ForLine:for(Id, Start, End, Step), Comp).

Recall that interpret_for/5, now with an additional argument representing the line number after the (future) NEXT statement, consists of a base case in which the loop condition is false, and the case when the execution of the loop should continue. What happens in the above code is (1) that interpret_statement/3 goes directly to the loop case of interpret_for/5, and since we do not know the line number of the NEXT statement we simply pass the nonsense value ‘void’, (2) when the loop is finished we continue execution at the line LineAfterNext, and (3) in the loop case of interpret_for/5 we push the current line to the stack, together with the identifier, start, end, and step. But how does interpret_for/5 ever get access to the line number LineAfterNext? That’s the job of the NEXT statement, which we’ll now define.

interpret_statement(Comp0, next(Id), Comp) :-
    pop(Comp0, ForLine:for(Id, Start, End, Step), Comp1),
    eval_exp(Comp1, Id + Step, NewVal),
    set_mem(Comp1, Id, NewVal, Comp2),
    eval_bool(Comp2, Id < End, Res),
    get_line(Comp2, LineAfterNext),
    set_line(Comp2, ForLine, Comp3),
    interpret_for(Res, Comp3, for(Id, Start, End, Step), LineAfterNext, Comp).

Hence, once a NEXT statement is reached we pop a FOR entry (pushed at line ForLine) from the stack, update the loop variable, evaluate the Boolean expression, get the line LineAfterNext, set the line number to ForLine, and branch to interpret_for/5. Note that once the base case of interpret_for/5 occurs the number LineAfterNext will be known.
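The stack operations push/3 and pop/3 (and get_line/2, set_line/3) are not shown in this entry. As a rough sketch, if the comp term were extended to, say, comp(Mem, Stack, Line), they could be defined as below. Note that comp(Mem, Stack, Line) is an assumed layout for illustration; the actual comp term in the interpreter may have more arguments.

```prolog
% Sketch only: comp(Mem, Stack, Line) is an assumed layout of the comp term.
push(comp(Mem, Stack, Line), Item, comp(Mem, [Item|Stack], Line)).
pop(comp(Mem, [Item|Stack], Line), Item, comp(Mem, Stack, Line)).

get_line(comp(_Mem, _Stack, Line), Line).
set_line(comp(Mem, Stack, _Old), Line, comp(Mem, Stack, Line)).
```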

Implementing IF GOTO, GOSUB, and RETURN

The interpreter is currently very modular and easy to extend with new language features. Let’s first consider IF GOTO statements. They have the form:

IF COND THEN GOTO LINE

where COND is a Boolean condition and LINE a line number. IF THEN ELSE statements are only available in the disk-based version of BASIC-80, so we’ll omit that feature (although it wouldn’t be hard to implement).

interpret_statement(Comp0, if(B, GotoLine), Comp) :-
    eval_bool(Comp0, B, Res),
    interpret_if(Res, Comp0, GotoLine, Comp).

interpret_if(0, Comp, _, Comp).
interpret_if(1, Comp0, GotoLine, Comp) :-
    interpret_statement(Comp0, goto(GotoLine), Comp).

Thus, we simply evaluate the Boolean expression and either do nothing, in which case execution will pick up at the line following the IF GOTO statement, or jump to the line GotoLine. We can similarly implement GOSUB statements:

GOSUB LINE
.
.
.
RETURN

GOSUB can be used to implement a poor man’s version of subroutines. When a GOSUB statement is encountered we’ll jump to LINE, and once a RETURN statement is encountered we’ll jump back to the line following the original GOSUB statement. Hence, the advantage of GOSUB compared to merely using GOTO is that it’s easier to use if the subroutine is called from several distinct places across the program, and it’s one of the very few features of BASIC which actually facilitates some form of structured programming.

interpret_statement(Comp0, gosub(Label), Comp) :-
    get_line(Comp0, Line),
    push(Comp0, Line, Comp1),
    interpret_statement(Comp1, goto(Label), Comp).

interpret_statement(Comp0, return, Comp) :-
    pop(Comp0, Line, Comp1),
    set_line(Comp1, Line, Comp).

Easy! Note that the stack is shared between GOSUB and FOR statements. This is going to work great as long as everything goes according to plan, but it’ll fail if we (e.g.) use GOTO to escape a subroutine or a FOR NEXT loop without going back. This could be handled by replacing the pop operation with a find operation which searches through the stack for a matching statement (but in my opinion we would actually make the interpreter worse by supporting this).
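If one nevertheless wanted to support such jumps, the find operation could be sketched as follows: search the (list-based) stack for the first entry unifying with the given pattern, discarding everything pushed above it. This is hypothetical code, not part of the interpreter:

```prolog
% Hypothetical: pop the first entry unifying with Entry, discarding
% any entries that were pushed after it.
find_and_pop([Entry|Stack], Entry, Stack).
find_and_pop([_|Stack0], Entry, Stack) :-
    find_and_pop(Stack0, Entry, Stack).
```

For example, find_and_pop(Stack0, _:for(id(i), _, _, _), Stack) would skip over any GOSUB return lines pushed after the loop entry for i.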

Summary

We threw away most of the prototype interpreter and made a fresh start. The internal representation of the interpreter state, the comp object, had to be extended quite a bit, but once these changes had been made it turned out to be simple to define new cases of interpret_statement/3. At the moment we still only support a small subset of BASIC-80, and in future entries we’ll see how the interpreter can be extended. Here are a few items on my todo list:

  • Implementing functions.
  • Implementing (multi-dimensional) arrays.
  • Implementing integer and floating point arithmetic from scratch, without even using Prolog’s predefined arithmetical operations (wait, what!?).
  • Implementing strings.

Resources

Writing a BASIC interpreter — Part 1

Introduction

We’re going to begin our journey (or rather, our descent into madness) by identifying a suitable subtask where we can very rapidly implement a solution. Why? At this stage, fully sketching a solution in a top-down manner, breaking everything into subtasks, is going to be very difficult since we are not sufficiently familiar with the problem to correctly identify all problem areas. Instead, we’ll try to choose a reasonable subset of BASIC which we can implement quickly, and which is non-trivial enough so that a solution to the full problem can use the solutions from the prototype. In the best-case scenario the prototype is simple to implement but scalable enough so that we essentially only have to extend it, case by case, to solve the large problem, but in practice that rarely happens. Hence, as long as we realise that the prototype is incomplete, and that we might have to throw it away and start from scratch (while hopefully learning something in the process), this method can be quite useful.

BASIC

Let’s do some simple BASIC programming. Consider the following program which computes the factorial of X and prints the result.

10 LET X = 5
15 LET Z = 1
20 FOR I = 1 TO X STEP 1
30 LET Z = Z * I
40 NEXT I
50 PRINT Z

Even if you’ve never touched BASIC before it should be possible to figure out how the above program works. We begin by initialising X and Z to 5 and 1, respectively. In the for loop we initialise I to 1, in each iteration we update the value of Z and increase I by 1, and we abort when I is larger than X. Last, we print Z. In addition, each line is labelled by a line number, but since the above program does not contain any goto statements these serve no purpose at the moment. From the above program we see that we need to be able to:

  • Represent variables, e.g., by a data structure which maps identifiers to values.
  • Assign values to variables.
  • Get values of variables.
  • Compute arithmetical expressions.
  • Iterate through a for-loop.
  • Evaluate Boolean expressions (later on, we want to support if expressions, and already now we need to be able to determine whether the loop variable is smaller than the given upper bound).
  • Print an expression.

This sounds like sufficient material for the prototype implementation. Later on, we’ll worry about implementing more features, but even the above program shows that we have to do something non-trivial, since we essentially have to simulate destructive assignment in a suitable way. For example, in the for loop we clearly need to increase the value of I in each iteration. But writing I = 1, I = 2 in Prolog is going to fail since there is no I which is simultaneously both 1 and 2.
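The point about destructive assignment can be seen directly at the Prolog top level: a logical variable can be bound at most once, so simulating an updated variable means threading fresh variables (or, as we’ll do, a state structure) through the computation. A minimal illustration, where increment/2 is a hypothetical helper, not part of the interpreter:

```prolog
% A logical variable can be bound at most once, so this conjunction fails:
% ?- I = 1, I = 2.
% false.

% Instead, an "update" takes the old value in and hands a new value out
% (increment/2 is a hypothetical helper for illustration only):
increment(I0, I) :- I is I0 + 1.

% ?- increment(1, I1), increment(I1, I2).
% I1 = 2, I2 = 3.
```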

A simple interpreter

We begin by making a few simplifying assumptions. We’ll assume that the program is given in a suitable format, e.g., by a list where each item corresponds to a parse tree representation of the BASIC statement on a line of the program. This is very simple to do in Prolog: the let statement on line 10 could for example be represented by the term let(x, 5), the let statement on line 30 by the term let(z, z*i), and the for loop could be represented by the term for(i, 1, x, 1, [let(z, z*i)]). Note that we do not need to explicitly store the next statement corresponding to the for loop, but rather just treat it as signifying the end of the body of the for loop (however, we’ll return to this issue later). In this representation it is important that x, z, and i are written with lowercase letters, since otherwise Prolog would treat them as logical variables, and as we have already seen, this is not a good representation. Then the entire program could be represented by the list:

[let(x, 5), let(z, 1), for(i, 1, x, 1, [let(z, z*i)]), print(z)]

For example, try writing

[X|Xs] = [let(x, 5), let(z, 1), for(i, 1, x, 1, [let(z, z*i)]), print(z)]

in Prolog’s query window. What happens? We’re going to get the answer that X is let(x,5), and that Xs is the list [let(z, 1), for(i, 1, x, 1, [let(z, z*i)]), print(z)]. Note, however, that e.g. let(x,5) at the moment has no other meaning than describing a term whose functor is let, and with two arguments, the constant x and the number 5. Hence, it’s just data, and would in other languages correspond to defining a struct with two elements, but has the advantage that we can define them on the fly, and that the intended meaning of the data is typically clear if we use good names. Similarly, for(i, 1, x, 1, [let(z, z*i)]) is simply a term with 5 arguments, where the 5th argument is a list containing the term let(z, z*i). Note that z*i is just a convenient way to write *(z,i) since * is a predefined operator. If this turns out to be a good representation, then we’ll keep it, and make sure that the parser that we write later outputs a list of this form. Hence, we can rather quickly try out whether this representation seems reasonable by writing an interpreter with respect to this format.

Our first challenge is to figure out how to represent variables. We’ll assume that there is only one global scope, and that the state of all variables therefore can be represented by a suitable data structure supporting get and set operations. This could, for example, be implemented by a list, but it’s easier and better to use a balanced tree for this purpose. Thankfully, the module library(assoc) is exactly what we are looking for. With the help of this data structure we begin by defining the following operations.

empty_mem(M) :- empty_assoc(M).

set(M0, Var, Val, M) :- put_assoc(Var, M0, Val, M).
get(M, Var, Val) :- get_assoc(Var, M, Val).

The only point behind empty_mem/1, set/4, and get/3, rather than using the corresponding operations from library(assoc) directly, is to make it easier to change the representation later on, if we so desire. From now on, as a convention, whenever we’re defining a predicate which takes a state and changes it, we’re going to let the input argument be the first argument of the predicate (e.g., M0 in set/4) and let the updated state be the last argument (e.g., M in set/4). At the top level query window we could then try out the query:

empty_mem(M0), set(M0, x, 5, M1), set(M1, z, 1, M).

The resulting tree M is then going to be a tree where the node with key x contains the value 5, and the node with key z contains the value 1, which seems like a very reasonable representation. In every step of the iteration we are then going to pass around a tree of this form which represents the current state of the interpretation. With this insight we then define a predicate interpret/3 where the first argument is the current memory configuration, the second argument a list of statements, and the third argument the result of interpreting the list of statements with respect to the memory from the first argument. The top level loop would then have the form:

interpret(Mem, [], Mem).
interpret(Mem0, [S|Ss], Mem) :-
	interpret_statement(Mem0, S, Mem1),
	interpret(Mem1, Ss, Mem).

Where the second argument is a list of statements of the form described above. Hence, the recursive case of interpret/3 can be read as: if Mem1 is the result of interpreting the statement S with respect to Mem0, and if Mem is the result of interpreting Ss with respect to Mem1, then the result of interpreting [S|Ss] with respect to Mem0 is Mem. Procedurally, what we are doing is (1) to get the first statement that should be interpreted, call it S, interpret S, and get a new memory Mem1, and then (2) to recursively interpret the tail of the list Ss, using Mem1 as the current memory. This is a good starting point, but before we turn to the problem of correctly defining interpret_statement, we’re going to do one additional abstraction. What if we later on figure out that we need to store more information in each step of the iteration? Maybe we need a stack? Maybe we want to pass around an output stream? Then we’d have to add each such item as an argument to interpret, and we might possibly also have to change every occurrence of interpret_statement/3, which could be cumbersome since it’s going to have a large number of possible cases (roughly, one case for each language construct). Hence, it’s better to devise a new data structure which contains everything we need in order to interpret the program. At the moment, we’re just passing around the memory, so as a placeholder we’ll wrap it inside a unary term comp/1, so that the current state of the interpreter is represented by comp(Mem), where Mem is as before. Again, the name comp (short for computer) is not important, it’s just data. Later on, when we (e.g.) add a stack, we’ll add an additional argument to the term and obtain a term of the form comp(Mem, Stack), where Stack would be a suitable representation of a stack. Naturally, we still have to rewrite the program to encompass this, but this is going to be rather manageable. Thus, our current interpreter has the following form.

:- use_module(library(assoc)).

empty_mem(M) :- empty_assoc(M).

init_comp(comp(M)) :-
	empty_mem(M).

set(M0, Var, Val, M) :- put_assoc(Var, M0, Val, M).
get(M, Var, Val) :- get_assoc(Var, M, Val). 

set_mem(comp(M0), Var, Val, comp(M)) :- set(M0, Var, Val, M).
get_mem(comp(M), Var, Val) :- get(M, Var, Val).

interpret(Comp, [], Comp).
interpret(Comp0, [S|Ss], Comp) :-
	interpret_statement(Comp0, S, Comp1),
	interpret(Comp1, Ss, Comp).

Hence, when we want to change the value of a variable, we’d use set_mem/4, rather than set/4, since the former hides the internal representation (this will hopefully be clearer in just a moment). We begin by considering LET statements. We have to (1) evaluate the expression on the right-hand side, and (2) assign the resulting value to the variable on the left-hand side, so that’s exactly what we do:

interpret_statement(Comp0, let(I, E), Comp) :-
        eval_exp(Comp0, E, V),
        set_mem(Comp0, I, V, Comp).

Clearly, we now have to define eval_exp/3, which can be defined by classical recursive problem solving. The base case (the simplest kind of expression) occurs when we have either a number or a variable. It might be tempting to define these two cases as follows:

eval_exp(_Mem, Exp, Exp).
eval_exp(Comp, I, V) :- get_mem(Comp, I, V).

(_Mem is an anonymous variable, indicated by starting the variable name with an underscore, to inform the Prolog system that it does not occur in any other place.) However, this is not correct, since the first fact states that the result of evaluating any expression Exp is the expression itself. The case handling identifiers works, since if I is not a valid identifier, then get_mem(Comp, I, V) is simply going to fail, and Prolog will try another matching rule, but it’s certainly not the best way to handle this. It might be tempting to introduce tests akin to the following:

eval_exp(_Mem, Exp, Exp) :- integer(Exp).
eval_exp(Comp, I, V) :- atom(I), get_mem(Comp, I, V).

But this is also not a good solution, for two reasons. First, what if we want to add support for additional types, e.g., floating point numbers, or what if atom/1 is too permissive (for example, maybe we want to enforce that variable identifiers can only consist of two characters)? Second, and more severely, consider a query of the form eval_exp(Mem, x + 1, Result). Even though the term x+1 (again, this is just a convenient way to write the term +(x,1)) clearly cannot match against the two base cases, the Prolog system is not smart enough to realise this. Furthermore, while a query of the form eval_exp(_, 5, Result) will work as expected, the Prolog system is still going to create choice points so that it has the possibility of trying the remaining cases of eval_exp/3 as well. Naturally, we shouldn’t take such low level considerations too seriously at this stage, but it turns out that there’s a solution which (1) represents the intent of the base cases much more clearly, and (2) resolves these efficiency issues. It’s also rather simple: we change the representation of numbers and identifiers. We’ll represent the number 5 by wrapping it inside a term int(5), and represent a variable identifier x by the term id(x). Then we can immediately say whether an expression is an integer, an identifier, or a more complicated expression, simply by comparing it to one of these cases. Again, this is just data, there’s nothing magical or strange going on. If we later on decide to support floating point numbers as well we could e.g. represent them by terms of the form float(0.5), and if we decide to support arrays we could wrap those identifiers into similar terms as well. Then the above two cases could be written as:

eval_exp(_Mem, int(Exp), Exp).
eval_exp(Comp, id(I), V) :- get_mem(Comp, id(I), V).

Now it’s easy to see that the two cases are mutually disjoint simply by glancing through the code. The recursive cases are then written in a rather uniform way:

eval_exp(Comp, E1 + E2, V) :-
        eval_exp(Comp, E1, V1),
        eval_exp(Comp, E2, V2),
	eval_plus(V1, V2, V).

eval_exp(Comp, E1 * E2, V) :-
        eval_exp(Comp, E1, V1),
        eval_exp(Comp, E2, V2),
	eval_mult(V1, V2, V).

%Add more functions later.

Here, eval_plus/3 and eval_mult/3 are just placeholders, since we haven’t decided how arithmetic should be evaluated yet. For example, do we want to simulate a fixed-bit CPU? Do we allow both signed and unsigned numbers? Such considerations are important, but they’ll have to wait until later where we have a more meaningful interpreter in place, and for the moment we simply define them using Prolog’s built-in support for arithmetic (is/2).

%Placeholder definitions.
eval_plus(V1, V2, V) :- V is V1 + V2.
eval_mult(V1, V2, V) :- V is V1 * V2.
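Collecting the pieces so far (a base case, the + case, and the placeholder eval_plus/3), we can already evaluate constant expressions at the top level. A condensed, self-contained version for experimentation:

```prolog
% Condensed from the clauses above, for trying out at the top level.
eval_exp(_Comp, int(N), N).
eval_exp(Comp, E1 + E2, V) :-
    eval_exp(Comp, E1, V1),
    eval_exp(Comp, E2, V2),
    eval_plus(V1, V2, V).

eval_plus(V1, V2, V) :- V is V1 + V2.

% ?- eval_exp(mem, int(2) + int(3), V).
% V = 5.
```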

All that we need to interpret the factorial program is to correctly interpret for loops. When encountering a for loop we then need to:

  • Evaluate the start expression.
  • Set the value of the loop variable to this value.
  • Evaluate the loop condition (true or false) and branch on this outcome.
  • If the condition turned out to be false, we stop, if it’s true, we update the value of the loop variable, interpret the body of the loop, and start anew with evaluating the loop condition.

There are several ways to accomplish this but the cleanest way (in my opinion) is to make sure that the predicate responsible for evaluating Boolean expressions returns 0 (false) or 1 (true), and then introduce an auxiliary predicate which correctly handles these two branches.

interpret_statement(Comp0, for(Id, Start, End, Step, Body), Comp) :-
	eval_exp(Comp0, Start, StartVal),
	set_mem(Comp0, Id, StartVal, Comp1),
        %We now assume that Res is either 0 or 1.
        eval_bool(Comp1, Id < End, Res),
	interpret_statement_for(Res, Comp1, for(Id, Start, End, Step, Body), Comp).

interpret_statement_for(0, Comp, for(_,_,_, _, _), Comp).
interpret_statement_for(1, Comp0, for(Id, Start, End, Step, Body), Comp) :-
	eval_exp(Comp0, Id+Step, NewValue),
	set_mem(Comp0, Id, NewValue, Comp1),
	interpret(Comp1, Body, Comp2),
        eval_bool(Comp2, Id < End, Res),
	interpret_statement_for(Res, Comp2, for(Id, Start, End, Step, Body), Comp).

An alternative solution would be to use the if-then-else construct in Prolog, but I dislike this construct since it (1) is non-logical, and (2) makes it harder to divide the program into small, understandable chunks. For example, in the above program one can immediately understand the 0 case of interpret_statement_for/4 without understanding the 1 case. If I see a rule containing an if-then-else statement then it’s harder to immediately see what the base case is, and what the recursive case is. However, there are certainly cases where if-then-else is better and more convenient than introducing an auxiliary branching predicate, so I’m not going to be dogmatic about its usage. Last, we need to define eval_bool/3 so that we have the ability to evaluate Boolean expressions. This turns out to be rather similar to evaluating arithmetical expressions, and to make the code more succinct we use the built-in predicate compare/3 which compares two given terms and returns the result of the comparison (<, =, or >). For example, compare(Order, 0, 1) succeeds with Order bound to <.

eval_bool(Comp, E1 < E2, Result) :-
        eval_exp(Comp, E1, V1),
        eval_exp(Comp, E2, V2),
	compare(Order, V1, V2),
	eval_less_than(Order, Result).

eval_less_than(<, 1).
eval_less_than(=, 0).
eval_less_than(>, 0).
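Additional comparison operators follow the same pattern; for instance, a hypothetical case for > (not needed by the factorial program, and not part of the code above):

```prolog
% Hypothetical: > handled the same way as <, reusing compare/3.
eval_bool(Comp, E1 > E2, Result) :-
    eval_exp(Comp, E1, V1),
    eval_exp(Comp, E2, V2),
    compare(Order, V1, V2),
    eval_greater_than(Order, Result).

eval_greater_than(>, 1).
eval_greater_than(=, 0).
eval_greater_than(<, 0).
```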

Naturally, we’ll add support for more logical operators when we need them. Before trying to interpret our example program we’ll add a simple definition of the print statement. This definition is likely going to change later on, since we might want to simulate an LCD display, but for the moment this is better than nothing.

interpret_statement(Comp, print(X), Comp) :-
	eval_exp(Comp, X, V),
	write(V), nl.

If we then try out the query:

P = [let(id(x), int(5)), let(id(z), int(1)), for(id(i), int(1), id(x), int(1), [let(id(z), id(z)*id(i))]), print(id(z))], init_comp(Comp0), interpret(Comp0, P, Comp).

We get the expected result that the factorial of 5 is 120. While the above program is far from complete, and likely has to be rewritten several times, we’ve still made some nice observations. First, the key idea behind the interpreter is to pass around a term representing the state of the interpreter, and allow the basic statements of the language to make changes to this state. For the moment we have a simple, high-level representation of the memory, but if we later on decide to change this representation to e.g. a term of fixed size, then we’ll be able to make these changes simply by modifying set_mem/4, get_mem/3, and empty_mem/1. Second, it turned out to be quite nice to be able to define interpret_statement/3 with one rule for each language construct, since it’s possible to immediately understand the meaning of a language construct without needing to understand how the rest of the interpreter is implemented. Hence, regardless of how the implementation changes we’re going to try our best to not convolute this simple structure.

Adding support for the GOTO statement

Swelling with confidence and pride over our fantastic prototype we now turn to one of BASIC’s most characteristic features: the GOTO statement. The syntax is simple enough:

GOTO LINE

where LINE is a line number. Let’s construct a simple test program and put it to action.

10 GOTO 20
15 LET X = 0
20 LET X = 1
30 PRINT X

In our current representation we don’t have line numbers, so that’s something that we have to implement before we even start thinking about how the GOTO statement should be implemented. One may imagine that we represent the above program using a list of the form:

[int(10):goto(int(20)), int(15):let(id(x), int(0)), int(20):let(id(x), int(1)), int(30):print(id(x))]

where : is a predefined operator, so each element in the above list is simply a tuple consisting of a line number and a statement. Imagine that the definition of interpret/3 doesn’t change, so that interpret_statement/3 now receives a tuple int(Line):Statement, where Line is a line number, and Statement a statement. Unfortunately, this means that we would now have to change every single occurrence of interpret_statement/3 to encompass this change. For example, we’d have to change the rule handling the let statement into something like:

interpret_statement(Comp0, _Num:let(I, E), Comp) :-
        eval_exp(Comp0, E, V),
        set_mem(Comp0, I, V, Comp).

Which doesn’t feel like the best solution since the let statement doesn’t care about the current line number. But if this turns out to be a good solution then maybe that’s a sacrifice we could live with. So imagine that we attempted to define the rule handling the GOTO statement as something like:

interpret_statement(Comp0, _Num:goto(Line), Comp) :-
        ???
	interpret(Comp0, ???, Comp).

Clearly, we need to fetch the statement at Line, but this is actually impossible in the current representation since interpret_statement/3 was defined with respect to an individual statement, and not the entire program. What to do? It might be tempting to attempt to change the definition of interpret/3 so that it sends the entire program to interpret_statement:

interpret(Comp, [], Comp).
interpret(Comp0, [S|Ss], Comp) :-
	interpret_statement(Comp0, S, Ss, Comp1),
	interpret(Comp1, Ss, Comp).

Then we’d have to change every single case of interpret_statement/3 and add an additional argument for the list of statements Ss. Again, this is not that nice, but perhaps we could live with it if it turned out that this was the most elegant solution. So we’d now implement interpret_statement/4 as:

interpret_statement(Comp0, _Num:goto(Line), Ss, Comp) :-
        find_program_at_line(Line, Ss, Ss1),
	interpret(Comp0, Ss1, Comp).
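As an aside, find_program_at_line/3 itself is easy to define as a plain list traversal (a sketch, using the int/1-wrapped line numbers):

```prolog
% Return, in the third argument, the program suffix starting at line Line.
find_program_at_line(Line, [Line:S|Ss], [Line:S|Ss]).
find_program_at_line(Line, [_|Ss0], Ss) :-
    find_program_at_line(Line, Ss0, Ss).
```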

Where we assume that find_program_at_line/3 takes the line number Line in question and the list of statements Ss, goes through this list until it finds a statement matching Line, and returns the program starting at that line in the third argument. Unfortunately, this doesn’t work, either. We actually have two severe problems. First, consider a program:

10 PRINT 1
20 GOTO 10

We would represent this by the list [int(10):print(int(1)), int(20):goto(int(10))]. But in the application of interpret_statement(Comp0, int(20):goto(int(10)), [], Comp) the list of statements in the third argument is simply the empty list, since there is nothing after the GOTO statement in the program. The problem, of course, is that we’re not able to go back in a singly-linked list once we have gone forwards. This could be fixed by adding an additional argument to comp/1, so that the interpreter state is represented by comp(Mem, P), where P is a list of statements. Hence, in every iteration, we’d always have access to the original program. However, there’s a second, more pressing issue. What would our interpreter answer for the query:

  P = [int(10):goto(int(20)), int(15):let(id(x), int(0)), int(20):let(id(x), int(1)), int(30):print(id(x))],
  init_comp(Comp0),
  interpret(Comp0, P, Comp).

? In the first iteration we would have S = int(10):goto(int(20)), and Ss = [int(15):let(id(x), int(0)), int(20):let(id(x), int(1)), int(30):print(id(x))]. We would then begin with the application interpret_statement(Comp0, S, Ss, Comp), which would find [int(20):let(id(x), int(1)), int(30):print(id(x))], and recursively interpret the program at that point, resulting in a state where x is 1. So far so good. But then we’d jump back to interpret(Comp1, Ss, Comp), which would continue interpreting the program starting with int(15):let(id(x), int(0)), changing x to 0. The problem, of course, is that interpret/3 is no longer correct, since (1) interpreting S, and then (2) recursively interpreting Ss is only valid if we didn’t encounter a GOTO statement. This is not good, and in fact this is only the tip of the iceberg. Imagine a for loop containing a GOTO statement:

10 FOR I = 1 TO 10 STEP 1
15 GOTO 25
20 NEXT I
25 PRINT I

Then we have a very similar problem in the definition of interpret_statement_for/4 which is going to attempt to recursively interpret its body (in this case consisting only of the GOTO statement), and then evaluate the loop condition. In fact, when scrutinising further, it turns out that our treatment of for loops, viewing the NEXT statement as simply indicating the end of the body of the for loop, is not correct, either, in the presence of GOTO statements. While the reference manual for BASIC-80 pretends that a FOR … NEXT loop should be viewed as a single language construct, the NEXT component of a for loop should actually be viewed as an independent statement, with the semantics of increasing the value of the loop variable and then returning control to the first matching for loop. In fact, we should even allow a program akin to the following.

1 GOTO 10
2 NEXT I
3 GOTO 20
10 FOR I = 1 TO 10 STEP 1
15 GOTO 2
20 PRINT I

We would then jump to 10, begin the FOR loop, jump to the NEXT statement at line 2, increase the value of I, jump to 15, and then jump back to 2. This is going to be repeated until I is 10, and execution then continues at line 3 since it’s the first line after the NEXT statement. Ouch. So it doesn’t really make sense to view a FOR loop as having a “body”, since it’s something that can actually dynamically change from iteration to iteration. Hence, when interpreting a for loop, we don’t know where the corresponding NEXT statement might be, and it’s impossible to statically determine this (in fact, undecidable). The main problem is that we tried to enforce a simple, more modern, control structure, which didn’t take GOTO statements into account. While we would certainly obtain a better language by simply forbidding programs of the above form, we wouldn’t really be interpreting BASIC anymore. So, at this stage we have no other choice than to go back to the drawing board.

Next time

We developed a prototype for a subset of BASIC which initially seemed to be rather promising, but which on closer inspection wasn’t capable of handling the dynamic control structure of the language. How this will be resolved is left as a cliffhanger until the next installment!

A Return to Form?

It feels a bit weird typing this entry almost a decade after the last major entry. So instead of inventing additional excuses I’m simply going to cut right to the chase: I’ve been watching a lot of episodes of Computer Chronicles lately and was struck with a sudden burst of nostalgia, reminiscing about the glory days of the personal computer industry (which I wasn’t even part of!), celebrating odd gadgets, microcomputers, obsolete programming languages, and weird hairdos. This episode, in particular, made me long for the Seiko UC-2000 watch, a wrist watch computer which, through a docking station, is capable of running user provided BASIC programs! What’s not to love about this? Sadly, the watch itself is rather expensive these days, which immediately struck down my dream of writing the best UC-2000 game of all time. I therefore opted for the second best option: implementing a BASIC interpreter in Prolog with capabilities similar to a BASIC-80 interpreter running on a Z80 CPU with only a couple of kilobytes of RAM. I started to look around for similar projects and it seems that I’m not unique in my vision. Far from it. Maybe there simply comes a time in one’s life when the call of the wilderness is too strong to resist, and one simply has to implement a BASIC interpreter from scratch. However, I was unable to find an interpreter written in Prolog, so my project at least fills that niche. This is certainly not due to any inability of Prolog; in fact, writing interpreters in Prolog is easier than in most other programming languages. Rather, it’s because sane people strive to get away from BASIC, instead of returning to it.

A brief introduction to logic programming with Prolog

Since it’s been a while since the last entry I suppose it can’t hurt to repeat some fundamental concepts before we turn to the main task. Logic programming represents a different programming paradigm compared to standard imperative programming languages. There are several equivalent characterizations:

  • (Database description) Think of a logic program as an extension of a relational database. Hence, we can define basic relations between objects, and can perform certain fundamental operations on these relations (join, intersection, and so on). However, in a logic program we’re in addition allowed to define relations using implications/rules of the form: p \gets p_1, \ldots, p_m, where p_1, \ldots, p_m is treated as a conjunction. Implications of this form should be interpreted as: p is true if p_1, \ldots, p_m are true. For example, if we have a database defining a \mathrm{child} relation between two individuals, so that \mathrm{child}(a,b) holds if b is a child of a, then we could define the \mathrm{grandparent} relationship between two individuals as: \mathrm{grandparent}(X,Z) \gets \mathrm{child}(X,Y), \mathrm{child}(Y,Z). By convention, variables are written with uppercase letters, and are universally quantified when they appear in the head of a rule, and existentially quantified otherwise. Hence, the previous rule should be read as: X is a grandparent of Z if there exists an individual Y such that Y is a child of X, and Z is a child of Y. In addition, rules are allowed to be recursive, and variables are allowed to range over compound expressions, terms, rather than just simple constants. Hence, our program describes relations, and we can then query this program similarly to how we would query a relational database.
  • (Logical description) A logic program consists of a set of axioms of the form p \gets p_1, \ldots, p_m, where we view simple statements of the form p as a shorthand for p \gets \mathrm{true}. The meaning of a logic program is the set of all logical consequences of the program. Hence, we encode our problems as logic, and solutions correspond to logical consequences.
  • (Operational description) When we’re defining a rule p \gets p_1, \ldots, p_m, it can sometimes be convenient to view as defining a “procedure”. If we “call” this procedure via a query p, then we are going to enter the body of p, p_1, \ldots, p_m, and evaluate each p_i before returning control.
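To make the database description concrete, the grandparent rule can be written out in Prolog's ASCII syntax (where \gets is written as :-); the child/2 facts below are invented purely for illustration:

```prolog
% Hypothetical child/2 facts: child(P, C) holds if C is a child of P.
child(abraham, isaac).
child(isaac, jacob).

% X is a grandparent of Z if there exists a Y such that Y is a child
% of X, and Z is a child of Y.
grandparent(X, Z) :-
    child(X, Y),
    child(Y, Z).
```

The query grandparent(abraham, Z) then succeeds with Z = jacob.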

The programming language Prolog is then a simple but efficient logic programming language where queries are answered in a top-down fashion using a backtracking search, and using a simple ASCII syntax where backward implications \gets are written as :-. For example, assume that we want to define a relation which we can use to check whether an element occurs in a list (Prolog has built-in syntax for lists, where an expression of the form [X|Rest] denotes a list whose first element is X and whose tail is Rest). The usual definition of this program is:

member(X, [X|Rest]).
member(X, [Y|Rest]) :- member(X, Rest).

The logical reading of this program is as follows: X is an element of the list [X|Rest], and if X is a member of Rest, then it is also a member of the list [Y|Rest], regardless of what Y is. In the operational reading of this program we may interpret it as: the base case of the recursion happens when we have found the element in question, i.e., when X and the first element of the list are equal. This is concisely expressed by \mathrm{member}(X, [X|Rest]), rather than the more cumbersome: \mathrm{member}(X, [Y|Rest]) :- X = Y. In the recursive case we remove the first element from the list, call it Y, and continue searching in the rest of the list. The logical, declarative reading of a program is simpler, but the operational reading is important, too, since it is closer to how Prolog behaves. However, understanding programs only on an operational level can be misleading, since then we may struggle to understand how a query of the form

member(a, L).

where L is a variable, could have (multiple) answers.
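For the record, this is what such a query looks like in practice: on backtracking, Prolog enumerates one candidate position for a at a time (answers as printed by SWI-Prolog, abbreviated):

```prolog
?- member(a, L).
L = [a|_] ;
L = [_, a|_] ;
L = [_, _, a|_] ;
...
```

Declaratively there is nothing strange going on: each answer is simply a (partial) list in which a provably occurs.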

Goals and limitations

  • The project will be written in SWI-Prolog, but any SWI-specific code
    will be suitably encapsulated.
  • Most of the code will be pure, i.e., no negation, no if-then-else, no cut, and so on, but I’ll happily use standard data structures instead of implementing them from scratch.
  • The implementation is not intended to provide a genuine UC-2000 experience. It’s just for fun, and I’ll typically strive for simplicity rather than authenticity, although I plan to implement some restrictions (e.g., fixed-width arithmetic).
  • I’ll implement a subset of BASIC-80 as described in the reference manual and this unofficial manual. Sadly, I think that the official reference manual is only available in Japanese. If a certain feature is not worth the implementation effort I’ll simply omit it.
  • I’ll mainly concentrate on the interpreting step and will typically assume that I’m given a tokenized and parsed program in a suitable representation. These first two steps are very easy to implement in Prolog and the details do not differ in any meaningful sense compared to parsing other programming languages. However, if anyone wants access to these tools, I’ll distribute them.

Next time

In the forthcoming entry I’ll start with something simple: a prototype interpreter for a small subset of BASIC.

Prologomenon is Taking a Hiatus

As you’ve probably noticed by now, the frequency of updates has been rather low during the past months. And by rather low, I mean close to zero. And by close to zero, I mean zero. This stems from the fact that my ongoing master’s thesis (structural restrictions of a certain class of “easy” but NP-complete constraint satisfaction problems) has nothing to do with logic programming. Hence I just don’t have the motivation or mental energy to simultaneously update the blog.

But fret not. I have every intention to keep the blog running once things have calmed down a bit. And if any of my learned readers have suggestions for upcoming topics I’m all ears. Just shoot me an email or write a comment.

Meta-Programming in Prolog – Part 2

Here is the story thus far: a meta-program is a program that takes another program as input or output. Based on this idea we wrote an interpreter for a simple logic programming language and later extended it to build a proof tree. A proof of concept, if you will. Meta-interpreters have lost a lot of steam in recent years. The reason being that they are just too hard to write in most popular programming languages. There’s no a priori reason that prevents us from writing a meta-interpreter in e.g. Python or Java, but the truth is that it’s such a lot of work that it’s not worth the trouble in most cases. The only exception that I can think of is integrated development environments, which typically have at least some semantic awareness of the object language. But these languages don’t have a simple core, and parsing them is awkward to say the least. In logic programming the situation is different. If an interpreter supports definite Horn clauses (facts and rules) and built-in operations it’s powerful enough to run quite a lot of real programs.

So what’s the purpose then? Is meta-programming just a sterile, academic exercise that has no place in real world software development? Since that was a rhetorical question, the answer is no. A resounding no! First, meta-interpreters are great for experimenting with new language features and implementation techniques. For instance we could ask ourselves if it would be worthwhile to add support for new search rules in Prolog instead of defaulting to a simple depth-first search. Implementing a new search rule in a meta-interpreter can be done in a few hours, and the resulting program won’t be longer than perhaps a page of code (unless you screwed up, that is). Doing the same task in an imperative programming environment could take days or even weeks depending on the complexity of the existing code base. So meta-programming is useful for prototyping. What else? It can actually be a great aid in debugging. In the following sections we’re going to explain what debugging means in logic programming and develop a simple but functional system for squashing bugs.

Algorithmic debugging

Assume that we have a logic program P and a goal query \leftarrow G. Sterling and Shapiro cite three possible bugs in The Art of Prolog:

  1. The interpreter could fail to terminate.
  2. The interpreter could return a false solution G\theta. (incorrectness)
  3. The interpreter could fail to return a true solution G\theta. (insufficiency)

Since the first problem is undecidable in general we shall focus on the latter two. But first we need to decide what the words true and false mean in this context, and in order to do that some remarks about the semantics of logic programs have to be made. If you’re feeling a bit rusty, I urge you to read up a bit on Herbrand models. Wikipedia and my own earlier post are both good starting points. The basic idea is fortunately rather simple. Logic formulas and programs can be viewed as specifications of models. A model is an interpretation in which the program is true. In general there are many, infinitely many, models of any given definite logic program. Which one should we choose? In a model we are free to reinterpret the non-logical vocabulary in any way we see fit. Consider the following logic program:

natural(zero).
natural(s(X)) \leftarrow natural(X).

It can be seen as a specification of either the set \{natural(0), natural(1), \ldots\} or the set \{natural(zero), natural(s(zero)), \ldots \}. Notice the subtle difference. The latter model is simpler in the sense that it doesn’t take us outside the domain of the textual representation of the program itself. Such models are known as Herbrand models. Could we be so lucky that Herbrand models are the only kind of models that we need to pay attention to? This is indeed the case. If a logic program has a model then it also has a Herbrand model. But we still need to pick and choose between the infinitely many Herbrand models. The intuition is that a model of a logic program shouldn’t say more than it has to. Hence we choose the smallest Herbrand model as the meaning of a logic program. Or, put more succinctly, the intersection of all Herbrand models. For a logic program P, let M_P denote the smallest Herbrand model of P.

This is good news since we now know that every well-formed logic program has a meaning. Let’s return to the question of false solutions. This notion is only relevant if the programmer has an intended meaning that differs from the actual meaning of the program. In all but the most trivial programming tasks this happens all the time. An intended meaning I_P of a logic program P is the set of ground goals for which the program should succeed. Note the “should”. If we briefly return to natural/1, the intended meaning is nothing else than the actual meaning, i.e. the set \{natural(zero), natural(s(zero)), \ldots \}. With this terminology it’s possible to give a precise definition of incorrectness and insufficiency of a logic program P:

  1. P is incorrect iff M_P \not\subseteq I_P.
  2. P is insufficient iff I_P \not\subseteq M_P.

With these definitions we see that the natural/1 program is neither incorrect nor insufficient. But let’s introduce some bugs in it:

natural1(\_).
natural1(s(X)) \leftarrow natural1(X).

natural2(zero).
natural2(s(X)) \leftarrow natural2(s(X)).

Can you spot them? natural1/1 is incorrect since the base clause is too inclusive. M_P is not a subset of I_P since e.g. the element natural1(-1) is a member of M_P but not of I_P. In the same vein, natural2/1 is insufficient since it’s equivalent to just natural2(zero).
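Transliterating the two definitions into executable :- syntax makes both bugs easy to probe interactively (just don’t actually run the looping query):

```prolog
% The two buggy programs from above, in executable syntax.
natural1(_).                        % base clause accepts any term at all
natural1(s(X)) :- natural1(X).

natural2(zero).
natural2(s(X)) :- natural2(s(X)).   % the recursive call never gets smaller

% ?- natural1(foo).      succeeds, although foo is no numeral (incorrectness)
% ?- natural2(s(zero)).  loops forever; only natural2(zero) is provable
%                        (insufficiency, showing up as nontermination)
```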

Quite a lot of legwork to explain something which is actually rather simple! What remains is to put everything in practice. Due to space constraints we’ll focus on the incorrectness problem.

Incorrectness

A logic program P is incorrect if it gives solutions that are not included in the intended model. In a real-world situation this means that the programmer has found a goal which the program should reject, but it doesn’t, and hence it contains at least one bug. The purpose is to find the part in the program that is responsible for the bug. In logic programming terms this is of course a clause. A clause A \leftarrow B is false iff B is true and A is false. The purpose of the algorithm is to traverse the proof tree and find such a clause. With this in mind we can at least write the top-level predicate:

   false_solution(Goal, Clause) :-
       %Build a proof tree.
       interpreter::prove(Goal, Tree),
       %Find a false clause.
       false_goal(Tree, Clause).

Well, that wasn’t too hard. What about false\_goal/2? The tree is of the form A \leftarrow B. Hence there are two cases: either B is false or it’s true. If it’s false, then we must continue the search in B. If it’s true, then the current clause is the clause that we’re looking for. To determine whether B is false we need an auxiliary predicate, false\_conjunction/2, where the first argument is the conjunction of nodes and the second argument is the false clause (if it exists).

   false_goal((A :- B), Clause) :-
       (    false_conjunction(B, Clause) ->
            true
       ;    Clause = (A :- B1),
            %Necessary since we don't want the whole tree.
            extract_body(B, B1)
       ).

By the way, this is a fine example of top-down development. In each step we’re breaking the original problem into easier problems and assume that we’re able to solve them later. false\_conjunction/2 is a bit trickier. The first argument is a conjunction of nodes of the form A \leftarrow B. Just like before there are two cases since A is either false or true. If it’s true, then we move on to the rest of the nodes. If it’s false, then we’d like to know whether B is true or false. Luckily we’ve already solved this problem before — a call to false\_goal/2 will do the trick just fine.

   false_conjunction(((A :- B), _Bs), Clause) :-
       query_goal(A, false),
       !,
       false_goal((A :- B), Clause).
   %Almost the same case as above, but with only one element.
   false_conjunction((A :- B), Clause) :-
       query_goal(A, false),
       !,
       false_goal((A :- B), Clause).
   false_conjunction((_A, As), Clause) :-
       %A is implicitly true.
       false_conjunction(As, Clause).

Only the most perplexing predicate remains: query\_goal/2. The second argument is true if A is true and false if it’s false. How can we know this? This is where the programmer’s intended model enters the picture. For now, we’re just going to use her/him as an oracle and assume that all choices are correct. The predicate is then trivial to write:

   query_goal(G, Answer) :-
       %Change later.
       write('Is the goal '),
       write(G),
       write(' true?'),
       nl,
       read(Answer).

In essence the user will be asked a series of questions during a session with the program. Depending on the answers, i.e. the intended model, the program will dive deeper and deeper into the proof tree in order to find the troublesome clause. As an example, here’s an append program where the base case is wrong:

append([_X], Ys, Ys) :- true.
append([X|Xs], Ys, [X|Zs]) :-
    append(Xs, Ys, Zs).

And the session with the program would look like this:

[1]  ?- debugging::false_solution(append([a,b,c], [d,e], Xs), Clause).
Is the goal append([b,c],[d,e],[b,d,e]) true?
|: false.
Is the goal append([c],[d,e],[d,e]) true?
|: false.
Xs = [a, b, d, e],
Clause = (append([c], [d, e], [d, e]):-true)

And we clearly see that it’s the base case that’s wrong.

Summary

The algorithm was taken from The Art of Prolog. Some simplifying assumptions have been made. Among other things there’s currently no support for built-in operations. This is rather easy to fix, however. A more serious question is if it would be possible to minimize the role of the oracle, since it’s now queried every time a decision needs to be made. There are two techniques for coping with this. Either we do a smarter traversal of the proof tree with e.g. divide and conquer, or we find a way to approximate the intended model of the program without the use of an oracle.

Source code

The source code is available at https://gist.github.com/1351227.

Meta-Programming in Prolog – Part 1

Introduction

Meta-programming is part of the folklore in Prolog, and is in general a rather old concept with roots tracing back to at least the 1950s. To give a definition that captures all the relevant concepts is outside the scope of this introductory text, but I shall at least provide some pointers that’ll be useful later on. Programs are useful in many different domains. We might be working with numbers, with graphs, with lists or with any other data structure. What happens when the domain is another programming language? Well, nothing, really: from the computer’s point of view there’s no difference between this scenario and the former. But conceptually speaking we’re writing programs that are themselves working with programs. Hence the word “meta” in meta-programming. A compiler or interpreter is by this definition a meta-program. But in logic programming we’re usually referring to something more specific when we’re talking about meta-programming, namely programs that take other logic programs as data. Since Prolog is a homoiconic language there’s also nothing that stops us from writing programs that take other Prolog programs as data, and even though there’s a subtle distinction between the two scenarios they are often referred to as one and the same. So, to summarize, when we’re talking about meta-programs in logic programming we’re quite often referring to Prolog programs that use logic programs as data.

The road map for this post is to see some examples of meta-interpreters in Prolog. Then we’re going to use the interpreters to aid program development with a technique known as algorithmic debugging. But enough talk, let’s do this!

Meta-interpreters

There’s still ample room for confusion regarding the word “meta” in meta-interpreter. I shall use the word whenever I refer to an interpreter for a logic programming language, even though this is not factually correct since one usually demands that the object language and the meta language are one and the same. That is: we write an interpreter for Prolog in Prolog. There are good reasons for not doing this. Prolog is a large and unwieldy language with many impure features such as cut, IO, assert/retract and so on, and when we’re working with meta-interpreters we’re often only interested in a small, declarative part of the language. Hence we shall restrict our focus to a programming language akin to pure Prolog which is basically just a set of Horn clauses/rules.

Even though we still haven’t decided the syntax for the object language we know that we must represent at least two things: facts and rules. Since a fact A is equivalent to the rule A \leftarrow true we can store these in the same manner. Assume that P is a definite logic program. How should we represent it? As a list or a search tree? This could be a good approach if we were interested in implementing dynamic predicates in a declarative way, but since P is static it’s much easier to just use the database and store everything as facts. For every rule A \leftarrow B_1, ..., B_n \in P, represent it as the fact rule(A, [B_1, ..., B_n]). If a rule only has the single atom true in its body, i.e. it is a fact, then the second argument is the empty list. Obviously this is just one of many possible representations, but it’s simple to implement and work with.

As an example, here’s how we would write append/3:

rule(append([], Ys, Ys), []).
rule(append([X|Xs], Ys, [X|Zs]),[append(Xs, Ys, Zs)]).

Simple, but not exactly pleasing to the eye. Fortunately it’s easy to add some syntactic sugar with the help of Prolog’s term expansion mechanism. Instead of directly using rule/2 we can rewrite append/3 as:

append([], Ys, Ys) :- true.
append([X|Xs], Ys, [X|Zs]) :-
    append(Xs, Ys, Zs).

And then define a suitable expansion object so that we end up with a set of rule/2 facts. This is a rather mundane and not very exciting programming task and hence omitted. Now on to the interpreter. It will be defined by a set of prove/1 clauses where the single argument is a list of goals. If you’ve never seen a meta-interpreter in Prolog before, you’re probably in for some serious disappointment since the program is so darn simple. So simple that a first reaction might be that it can’t possibly do anything useful. This first impression is wrong, however, since it’s easy to increase the granularity of the interpreter by implementing features instead of borrowing them from the Prolog system.
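For the curious, here is a minimal sketch of that omitted expansion step using SWI-Prolog’s term_expansion/2 hook rather than a Logtalk expansion object (object_level/1 and body_list/2 are helper names invented here; the guard keeps the hook from rewriting anything but the declared object-level predicates):

```prolog
% Predicates that belong to the object language (extend as needed).
object_level(append/3).

% body_list(+Conjunction, -List): flatten a body (A, B, C) into [A, B, C];
% the body `true` becomes the empty list.
body_list((A, B), [A|Bs]) :- !,
    body_list(B, Bs).
body_list(true, []) :- !.
body_list(G, [G]).

% Clauses of object-level predicates are rewritten into rule/2 facts
% at load time; all other clauses are left alone.
term_expansion((Head :- Body), rule(Head, Goals)) :-
    functor(Head, F, A),
    object_level(F/A),
    body_list(Body, Goals).

% With the hook in place, these two clauses load as rule/2 facts.
append([], Ys, Ys) :- true.
append([X|Xs], Ys, [X|Zs]) :-
    append(Xs, Ys, Zs).
```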

As mentioned the interpreter takes a list of goals as argument. This means that there’s a base case and a recursive case. In the base case of the empty list we are done. In the recursive case we have a list of the form [G|Gs] where G is the first goal that shall be proven. How do we prove G then? By checking whether there’s a corresponding rule rule(A, [B_1, ..., B_n]) where A and G are unifiable with mgu \theta, and recursively proving ([B_1, ..., B_n|Gs]) \theta. In almost any other language this would be considerable work, but since Prolog is a logic programming language we already know how to do unification. Thus we end up with:

%Initialize the goal list with G.
prove(G) :-
   prove1([G]).

prove1([]).
prove1([G|Gs]) :-
   rule(G, B),
   prove1(B),
   prove1(Gs).

This is a prime example of declarative programming. We’ve only described what it means for a conjunction of goals to be provable and left the rest to the Prolog system. If you’re unsure why or how the interpreter works I urge you to try it for yourself.
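To make that experiment concrete, here is everything collected into one self-contained snippet: the rule/2 encoding of append/3 from above, the interpreter, and a sample query:

```prolog
% Object program: append/3 in the rule/2 representation.
rule(append([], Ys, Ys), []).
rule(append([X|Xs], Ys, [X|Zs]), [append(Xs, Ys, Zs)]).

% The vanilla interpreter.
prove(G) :-
    prove1([G]).

prove1([]).
prove1([G|Gs]) :-
    rule(G, B),
    prove1(B),
    prove1(Gs).

% ?- prove(append([a], [b,c], Zs)).
% Zs = [a, b, c].
```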

Extensions

To prove that I wasn’t lying before I shall illustrate some neat extensions to the bare-bones interpreter. Strictly speaking we don’t really need anything else since the language is already Turing complete. It’s e.g. trivial to define predicates that construct and operate on the natural numbers. For example:

nat(zero) :- true.
nat(s(X)) :- nat(X).

add(zero, Y, Y) :- true.
add(s(X), Y, s(Z)) :-
   add(X, Y, Z).

But since these operations can be implemented much more efficiently on any practical machine it’s better to borrow the functionality. Hence we shall define a set of built-in predicates that are proved by simply executing them. The easiest way is to add a rule/2 definition for every built-in predicate.

rule(rule(A, B), []) :-
    rule(A, B).
rule((X is Y), []) :-
    X is Y.

Why the first clause? So that we can facilitate meta-programming and use rule/2 in our object language. I mentioned earlier that the interpreter as defined is not really a meta-interpreter in the strict sense of the word, and that Prolog is such a large language that writing meta-interpreters for it is probably not worth the hassle. But now we have a very restricted yet powerful language. Can we write a real meta-interpreter in that language? Yes! Actually it’s hardly any work at all since we already have the source code for the old interpreter.

prove(G) :-
    prove1([G]).

prove1([]) :- true.
prove1([G|Gs]) :-
    rule(G, B),
    prove1(B),
    prove1(Gs).

Glorious. Perhaps not very practical, but glorious.

Building a proof tree

When our interpreter gives an answer it doesn’t provide any indication as to why that answer was produced. Perhaps the answer is in fact wrong and we want to localize the part of the code that is responsible for the error. The first step in this process is to build a proof tree. A proof tree for a goal \leftarrow G and logic program P is a tree where 1) the root is labeled G, and  2) each node has a child for every subgoal with respect to P. Hence the proof tree is more or less a representation of a sequence of trace steps.

It might sound like a complex task, but it’s really not. All we need is to extend the prove/1 predicate with an additional argument for the proof tree. In the base case of the empty list the tree contains the single node true. If [G|Gs] are the current goals then we prove G and Gs and build a proof tree from the recursive goals.

prove(G, T) :-
    prove1([G], T).

prove1([], true).
prove1([G|Gs], ((G :- T1), T2)) :-
    rule(G, B),
    prove1(B, T1),
    prove1(Gs, T2).

And when called with G = append([a,b], [c,d], Xs) the resulting tree looks like this:

?- interpreter::prove(append([a,b], [c,d], Xs), T).
Xs = [a, b, c, d],
T = ((append([a, b], [c, d], [a, b, c, d]):- (append([b], [c, d], [b, c, d]):- (append([], [c, d], [c, d]):-true), true), true), true)

NB: this tree has a lot of redundant true entries. How can we fix this?

Summary

We’re now able to build proof trees. In the next entry we’re going to use them to localize errors in logic programs.

For a good discussion of meta-interpreters in Prolog the reader should turn to The Craft of Prolog by Richard O’Keefe. This post was just the tip of the iceberg. Another interesting subject is to experiment with different search rules, and for this I shamelessly promote my own bachelor’s thesis which is available at http://www.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:325247.

Source code

The source code is available at https://gist.github.com/1330321.

Arcane Abuses of append

First I should point out that the following predicates hardly qualify as arcane, and they’re not really that abusive either. But they do use append, and one out of three isn’t so bad after all? append/3 is one of Prolog’s most useful predicates and is often one of the list predicates first taught to students. Beginners, and especially those familiar with other programming languages, sometimes have a hard time recognizing the multiple usages of the predicate however. Just for reference and to make sure that we’re on the same page, the usual definition goes like this:


   append([], Ys, Ys).
   append([X|Xs], Ys, [X|Zs]) :-
        append(Xs, Ys, Zs).

Nothing fanciful. Just a standard recursive predicate which holds if Zs is the list obtained when appending all the elements of Xs with all the elements of Ys. So when should we use this predicate? When we want to append two lists? No! In all my years of using Prolog I don’t think I’ve used append/3 for this purpose in a serious program even once. The reason being that difference lists are usually a much better choice in these instances since they can be appended in constant instead of linear time. So let’s try to figure out some other usages.
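As an aside, the constant-time difference-list append alluded to here is itself a single fact (append_dl/3 and the List-Tail pair representation are my choices for this sketch, not part of append/3):

```prolog
% A difference list Xs-Ys represents the elements of Xs up to the open
% tail Ys. Appending just equates the tail of the first list with the
% front of the second: no traversal is needed.
append_dl(Xs-Ys, Ys-Zs, Xs-Zs).

% ?- append_dl([a,b|T1]-T1, [c|T2]-T2, Zs-[]).
% Zs = [a, b, c].
```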

member(X, Xs) is true if X is a member of the list Xs. It’s of course not hard to write this as a recursive predicate as we did with append/3, but why bother if there’s an easier way? So let’s solve it with append/3 instead. Upon a first inspection it might not look like they have anything to do with each other. How can we find an element in a list by appending two lists? The answer is actually pretty simple. We know that we take a list, Xs, as argument. Can we find two other lists such that they give Xs when appended? Of course. Just call append/3 with Xs as the third argument. Remember that append/3 is a relation and not a function:

?- Xs = [a,b,c], append(A, B, Xs).
Xs = [a, b, c],
A = [],
B = [a, b, c] ;
Xs = [a, b, c],
A = [a],
B = [b, c] ;
Xs = [a, b, c],
A = [a, b],
B = [c] ;
Xs = A, A = [a, b, c],
B = [] ;
false.

That was the first step. Now let’s find an interpretation of membership that can be cast in terms of these three lists. How about this: X is a member of Xs if Xs can be divided into two parts, A and B, such that X comes between A and B. Put into code this is:

    member(X, Xs) :-
        append(_A, [X|_B], Xs).

Very easy once you know the trick, but difficult if one is afraid of using append/3 as a relation instead of a function. A similar problem is the sublist problem: given a list Xs, is Ys a sublist of Xs? Again it’s not hard to imagine how a recursive version would look, but perhaps we can find an easier solution with the help of append/3. A sublist is a contiguous subsequence. This can be expressed in terms of three lists: Ys is a sublist of Xs if there exist two lists, A and B, such that A appended with Ys and B results in Xs. That was quite a mouthful, but in essence it’s the same thing as we did with member/2, with the difference being that we’re looking for a list instead of a single element. Assume that we had the predicate append/4. Then sublist could be solved as:

sublist(Xs, Ys) :-
    append(_A, Ys, _B, Xs).

Alas, since we don’t have such a predicate we’re going to use append/3 twice instead. First Xs is divided into A and B. Then we find the sublist Ys by saying that Ys is a prefix of B.

    sublist(Xs, Ys) :-
        append(_A, B, Xs),
        append(Ys, _, B).

It should be noted that this solution gives rise to duplicate answers. Why? Assume that Xs = [a,b]. Then the answer Ys = [] can be found by first binding B to [a,b] and then taking the empty prefix of it. Or it can be found by binding B to [b], or to [], and again taking the empty prefix. This is a bummer since we’re only interested in one of these answers. The implementation of an optimized version is left as an exercise to the reader.

select/3, last/2 and other basic list processing predicates can be implemented in essentially the same manner. As a last example we’re going to implement nth/3 with append/3 and length/2. nth(X, Xs, N) is true if X is the Nth member of Xs, counting from 0. One observation is enough to give us a solution: X is the Nth element of Xs if the number of elements preceding X is equal to N. This is easy to check with length/2:

    nth(X, Xs, N) :-
        append(A, [X|_], Xs),
        length(A, N).

A question to the observant reader: why is the order of the two goals in the body not swapped? Also, as a concluding remark: I’ve been told that it’s not always a good idea to do something just because you can. That might well be true. This version of nth/3 is rather inefficient and I wouldn’t recommend that anyone try it at home!

Delightful Summer Reading

Apologies for the lack of updates! In my defense, I’ve been rather busy in an attempt to finish the drafts of not only one, but two, novels. The first of these is about bananas, puns and slapstick humour while the second is heavily influenced by my interest in logic and incompleteness. Over the course of the summer I’ve also been doing a fair amount of reading. The two non-fiction books that I’m currently digging my teeth into are The World of Mathematics and Logic, Logic and Logic.

The World of Mathematics is a vast collection of essays (the Swedish edition, Sigma, consists of six volumes in total!) spanning topics such as biographies of the great thinkers, historical problems and also more recent investigations of the foundations of mathematics. Highly recommended, and as a bonus it looks great on the bookshelf.

Logic, Logic and Logic is a collection of articles by the prominent logician-philosopher George Boolos. The bulk of the text is dedicated to various papers on Frege that, among other things, shed light on the importance of his Begriffsschrift. You probably knew that Frege’s life work was demolished when Bertrand Russell presented his infamous paradox in a letter to him, but surprisingly enough it has been found that a rather large portion of it can be salvaged by very small means. The end result is a consistent, second-order theory of arithmetic that in some ways is much more elegant than the usual Peano axiomatic formulation (whose axioms are instead derived). The fine details are not always that easy to follow, but Boolos’ interesting philosophical remarks make it worthwhile in the end. Also recommended!

If time permits, I’ll also revisit Computing With Logic by David Maier and David S. Warren. For some reason I left it half-finished when I read it the last time. It more or less contains everything you need to know, in theory and practice, to implement a reasonably efficient Prolog system (excluding parsing), and could potentially serve as inspiration for a few blog entries in a not-so-distant future.