
Arcane Abuses of append

First I should point out that the following predicates hardly qualify as arcane, and they’re not really that abusive either. But they do use append, and one out of three isn’t so bad after all? append/3 is one of Prolog’s most useful predicates and is often among the first list predicates taught to students. Beginners, especially those familiar with other programming languages, sometimes have a hard time recognizing the predicate’s multiple usages, however. Just for reference, and to make sure that we’re on the same page, the usual definition goes like this:


    append([], Ys, Ys).
    append([X|Xs], Ys, [X|Zs]) :-
        append(Xs, Ys, Zs).

Nothing fanciful. Just a standard recursive predicate which holds if Zs is the list obtained by appending the elements of Xs with the elements of Ys. So when should we use this predicate? When we want to append two lists? No! In years of using Prolog I don’t think I’ve used append/3 for this purpose in a serious program even once, the reason being that difference lists are usually a much better choice in these instances, since they can be appended in constant instead of linear time. So let’s try to figure out some other usages.
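
To make the difference-list remark concrete, here is a minimal sketch. Note that dl_append/3 is a made-up name for illustration, not a library predicate: a list is represented as a pair Xs-Hole, where Hole is the list’s unbound tail, and appending two such pairs is a single unification.

    % Minimal difference-list sketch (dl_append/3 is not standard).
    % Xs-Hole denotes the list Xs up to its unbound tail Hole.
    % Appending unifies the first hole with the second list: O(1).
    dl_append(Xs-Hole, Hole-Rest, Xs-Rest).

    % ?- dl_append([a,b|T1]-T1, [c,d|T2]-T2, Xs-[]).
    % T1 = [c, d], T2 = [], Xs = [a, b, c, d].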

member(X, Xs) is true if X is a member of the list Xs. It’s of course not hard to write this as a recursive predicate, as we did with append/3, but why bother if there’s an easier way? So let’s solve it with append/3 instead. On first inspection it might not look like the two have anything to do with each other. How can we find an element in a list by appending two lists? The answer is actually pretty simple. We know that we take a list, Xs, as argument. Can we find two other lists such that they give Xs when appended? Of course: just call append/3 with Xs as the third argument. Remember that append/3 is a relation and not a function:

?- Xs = [a,b,c], append(A, B, Xs).
Xs = [a, b, c],
A = [],
B = [a, b, c] ;
Xs = [a, b, c],
A = [a],
B = [b, c] ;
Xs = [a, b, c],
A = [a, b],
B = [c] ;
Xs = A, A = [a, b, c],
B = [] ;
false.

That was the first step. Now let’s find an interpretation of membership that can be cast in terms of these three lists. How about this: X is a member of Xs if Xs can be divided into two parts, A and B, such that X comes between A and B. Put into code this is:

    member(X, Xs) :-
        append(_A, [X|_B], Xs).
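
A query at the top level confirms that this single clause enumerates the members on backtracking, exactly as the recursive version would:

    ?- member(X, [a,b,c]).
    X = a ;
    X = b ;
    X = c ;
    false.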

Very easy once you know the trick, but difficult if one is afraid of using append/3 as a relation instead of a function. A similar problem is the sublist problem: given a list Xs, is Ys a sublist of Xs? Again it’s not hard to imagine how a recursive version would look, but perhaps we can find an easier solution with the help of append/3. A sublist here means a contiguous subsequence. This can be expressed in terms of three lists: Ys is a sublist of Xs if there exist two lists, A and B, such that A appended with Ys appended with B results in Xs. That was quite a mouthful, but in essence it’s the same thing we did with member/2, the difference being that we’re looking for a list instead of a single element. Assume that we had the predicate append/4. Then sublist could be solved as:

    sublist(Xs, Ys) :-
        append(_A, Ys, _B, Xs).
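
For the curious, such an append/4 is easy to define in terms of append/3. A minimal sketch, with the goal order chosen so that both calls work on bound lists whenever Xs is bound:

    % Hypothetical append/4: Xs is the concatenation of As, Bs, Cs.
    % First split Xs into As and BsCs, then split BsCs into Bs and
    % Cs; with Xs bound, both append/3 goals terminate.
    append(As, Bs, Cs, Xs) :-
        append(As, BsCs, Xs),
        append(Bs, Cs, BsCs).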

Alas, since append/4 is not a standard predicate, we are going to use append/3 two times instead. First Xs is divided into A and B. Then we find the sublist Ys by saying that Ys is a suffix of B.

    sublist(Xs, Ys) :-
        append(_A, B, Xs),
        append(_, Ys, B).
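
A quick query shows the answers, and also hints at a problem:

    ?- sublist([a,b], Ys).
    Ys = [a, b] ;
    Ys = [b] ;
    Ys = [] ;
    Ys = [b] ;
    Ys = [] ;
    Ys = [] ;
    false.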

As can be seen, this solution gives rise to many duplicate answers. Why? Assume that Xs = [a,b]. Then the answer Ys = [b] can be found by first binding B to [a,b] and then taking Ys to be the suffix [b] of this list, or by binding B to [b] and then taking Ys to be the suffix [b] of B. This is a bummer since we’re only interested in one of these answers. The implementation of an optimized version is left as an exercise to the reader.

select/3, last/2 and other basic list processing predicates can be implemented in essentially the same manner. As a last example we’re going to implement nth/3 with append/3 and length/2. nth(X, Xs, N) is true if X is the Nth member of Xs, counting from 0. One observation is enough to give us a solution: X is the Nth element of Xs if the number of elements preceding X in Xs is equal to N. This is easy to check with length/2:

    nth(X, Xs, N) :-
        append(A, [X|_], Xs),
        length(A, N).
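
For instance, it can be used both to look up an element by index and to find the index of an element:

    ?- nth(X, [a,b,c], 1).
    X = b ;
    false.

    ?- nth(b, [a,b,c], N).
    N = 1 ;
    false.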

A question for the observant reader: why is the order of the two goals in the body not swapped? Also, as a concluding remark: I’ve been told that it’s not always a good idea to do something just because you can. That might well be true. This version of nth/3 is rather inefficient, and I would not recommend that anyone try it at home!


Prologomenon Tackles Logical Empiricism

Back, at last! This time we’ll have a look at a movement close to our hearts: logical empiricism. The subject is not really related to logic programming at all, but I wrote this text as an assignment in a Philosophy of Science course and figured that I might as well post it here in case someone finds it enjoyable.

Disclaimer one: It’s not particularly well written and is probably full of errors and misunderstandings.
Disclaimer two: I did an automatic LaTeX to HTML conversion that apparently couldn’t handle footnotes and references. Since I’m a lazy and immoral person I simply left them as is.

1. Introduction

Science is concerned with theories that make claims about the physical world. Hence it should not come as a great surprise that there exists a relation between facts and laws. Can universal laws be logically derived from observational facts? The explication of this relation is not a problem of science but a philosophical one that has plagued philosophers for centuries, and it is surprisingly difficult to answer without resorting to circular or ad hoc arguments. This text will give a brief introduction to two important movements in the history of the philosophy of science: empiricism\footnote{Empiricism is sometimes grouped together with positivism and inductivism.} and logical empiricism. The focus will be on one of the most important principles in the logicism movement: the distinction between analytic and synthetic sentences.

2. Empiricism

Empiricism takes a radical stance on the question of how science is related to facts, and asserts the importance and infallibility of sensory experience and observation. The ideal view is that the human mind is a blank tablet, a tabula rasa, and that knowledge can only be gained from experience. Hence, a scientist should primarily conduct research by observing the world and deriving general conclusions from the observations. This methodology is called induction.\footnote{Not to be confused with the concept of mathematical induction.} The logical form is:

{property(a_0) \wedge property(a_1) \wedge \dots \wedge property(a_n) \Rightarrow \forall x:\, property(x)}

Here we assume that {x} is restricted to a suitable domain. Let us say that we are interested in the question of whether all copper conducts electricity, and have successfully verified this property in {n} experiments. An application of the induction principle would then have the form:

{conducts\_electricity(copper\_sample_1) \wedge conducts\_electricity(copper\_sample_2) \wedge \dots \wedge conducts\_electricity(copper\_sample_n)}

{\Rightarrow \forall x \in Copper:\, conducts\_electricity(x)}

Both the premises and the conclusion are true, yet the reasoning is not logically valid: no finite number of affirmative observations can prove the necessity of the general hypothesis. This is known as the problem of induction and was elucidated by David Hume in the eighteenth century. At first it might appear to be a pedantic and irrelevant objection, since scientists rarely claim to work with absolute knowledge. If the induction principle allows us to derive more and more probable hypotheses, then that should be good enough. Here is the catch: for any set of observed facts there exist infinitely many mutually incompatible hypotheses that agree with the observations. Hence we cannot claim that our particular hypothesis is probable in the usual mathematical sense of the word. This should of course not be taken as an indication that inductive testing is void of meaning, but rather that it is inherently difficult to assign a truth value to a scientific theory (that has not been falsified).

This is not the only objection to empiricism. Among other things it can be argued that too much weight is placed on sensory experience, which can hardly be considered realistic since almost all observations depend on an underlying theory. Furthermore, why should one demand that theories be derived by induction when the principle is not logically valid?

Further critique can be found in e.g. Chalmers \cite{Chalmers}.

3. Logical Empiricism

Logic made great strides in the late nineteenth century and the beginning of the twentieth century. The German mathematician and philosopher Gottlob Frege developed the first formal system capable of higher-order logic in the outstanding, but also rather impervious, Begriffsschrift \cite{Frege}. Frege’s aim was to show that mathematics, starting with number theory, did not depend on empirical experience but was true a priori and could be inferred from pure logic alone. That is, it is analytic instead of synthetic. The sentence There exists an infinite number of primes is to Frege an analytic proposition that is true only in virtue of its constituents. At the same time the sentence It will rain tomorrow is synthetic, since there is no logical reason why it has to be true or false: it can be confirmed or falsified by observations.

We have learned that pure, strict empiricism fell short for a number of reasons. Does this mean that there is no difference between science and e.g. religion? Intuitively one would like to claim that the most important difference is that the domains of discourse are different: in the first case claims are made about the physical world, while in the second case the claims are, at least partly, metaphysical in nature. If one adheres to this demarcation principle then it becomes pivotal to filter out the metaphysical statements in a scientific theory. This was one of the aims set forth by Alfred Jules Ayer in Language, Truth and Logic \cite{Ayer}, which later became the epitome of logical empiricism \cite{Stanford}. With the advances made in formal logic, it was believed that it would be possible to define a verification principle that would weed out metaphysical propositions. Just like Frege, Ayer made a distinction between analytic and synthetic propositions. For a verification principle to be successful, then, it needs to be capable of ascertaining whether a synthetic proposition is empirical. Any sentence that failed the verification principle would be labeled as metaphysical and asserted to be absolutely void of any meaningful content. This is a harsh claim, and it is perhaps not too surprising that the definition of a verification principle is irksome and prone to errors. Ayer gave the following criterion \cite{Stanford}:

A (synthetic) sentence {A} has empirical content if either:

  • {A \Rightarrow O}, where {O} is an observational sentence.
  • There exists a {B} such that {A \wedge B \Rightarrow O}, where {O} is an observational sentence such that {B \nRightarrow O}.

The second criterion means that {A} can be used with another sentence, {B}, to derive an observational sentence that could not have been derived from {B} alone.

It can be argued that this principle is at least necessary, but is it sufficient? Not at all. Consider any sentence {A} and an observational sentence {O}. If, for an arbitrary {A}, we can always find a sentence {B} such that {A \wedge B} implies {O} while {B} alone does not, then the principle fails to exclude anything. Finding such a {B} is simple: take {B = A \Rightarrow O}. Then an application of modus ponens shows that {A \wedge B \Rightarrow O}.
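
Written out, with the side condition that {B} alone must not imply {O} checked as well:

{B = (A \Rightarrow O), \qquad A \wedge (A \Rightarrow O) \models O \text{ (modus ponens)}, \qquad B \nRightarrow O}

The last part holds since {B} is true whenever {A} is false, so {B} alone says nothing about {O}. Hence any sentence {A} whatsoever passes the second criterion.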

Even though this simplistic attempt failed, it is not impossible that a more complicated version could do the trick. Of course, this hinges on the assumption that there is a distinction between synthetic and analytic propositions. If it can be argued that no such distinction exists, then we would simply have to abandon the idea behind Ayer’s verification principle. This was the approach taken by Willard Van Orman Quine in his seminal paper Two Dogmas of Empiricism \cite{Quine}\footnote{Where the second “dogma” is reductionism.}. Quine argues that the current notion of analyticity is circular in the sense that it relies on another concept, synonymy, that cannot be explained without referring to analyticity. The argument is best illustrated with examples. One of the hallmark sentences in the analytic tradition is No unmarried man is married, which is true no matter what “man” and “married” refer to, as long as the logical syntax remains fixed. Quine then puts forth a second category of analytic sentences: those that can be transformed into logically true (analytic) sentences by replacing synonyms with synonyms. For example: No bachelor is married. We have to agree that this sentence is not logically true, at least not in the same sense as the preceding one, but it could be taken as analytic once bachelor is replaced with unmarried man.

But when are two words synonyms? A dictionary is of no help here, since it only records what is already known; there is no explanation behind the stated facts. The idea that two words are synonyms if they are interchangeable in all contexts fares no better, since it fails for e.g. “bachelor” and “unmarried man” in the context “… has fewer than 10 letters”. Let us however, for the sake of the argument, ignore such syntactic differences between words. Is this kind of interchangeability good enough to describe synonymy?

Assume that “bachelor” and “unmarried man” are interchangeable in the sense just described. Then from the statement Necessarily all and only bachelors are bachelors we can conclude that Necessarily all and only bachelors are unmarried men, by substituting “unmarried man” for the second occurrence of “bachelor”. But to claim this last statement is to claim that the statement All and only bachelors are unmarried men is analytic! Why? Because if a statement is necessarily true then it is analytic, and we are back where we started, since we need the concept of analyticity to explain the concept of synonymy.

If one agrees with Quine’s reasoning then no sharp distinction can be made between synthetic and analytic propositions, which would leave most of the original logicism program in shambles.

4. Discussion

As far as I can tell there is no logical flaw in Quine’s argument. I am however a bit skeptical of how quickly he dismisses definitions as an explanation of synonymy. Does it really matter why two words are synonyms? I see no inherent problem in simply accepting the fact that some words have evolved to mean the same thing. This would however have the consequence that it is impossible to say that a sentence is analytic without assuming a specific, underlying theory. But is this a problem? Most mathematicians would argue that the sentence {1 + 1 = 2} is analytic, but this is only true if we assume the usual definitions of {1}, {+}, {=} and {2}; it would not be true if we interpreted {+} as some other relation. A better example can be found in geometry: is the parallel postulate analytic? Yes, if we are working in Euclidean geometry. No, if we are working in a non-Euclidean geometry.

Hence I still think that we can make a distinction between synthetic and analytic propositions, but only with respect to an underlying theory; though I would suspect that this notion of analyticity is not strong enough to be of any use in the logicism program.


References

\bibitem{Chalmers} Alan Chalmers. What Is This Thing Called Science? Hackett Publishing Company, 1999.

\bibitem{Frege} Michael Potter and Tom Ricketts. The Cambridge Companion to Frege. Cambridge University Press, 2010.

\bibitem{Ayer} Alfred Jules Ayer. Language, Truth and Logic. Dover Publications, 2nd edition, 1952.

\bibitem{Stanford} Richard Creath. Logical Empiricism. The Stanford Encyclopedia of Philosophy, Summer 2011 edition.

\bibitem{Quine} Willard Van Orman Quine. Two Dogmas of Empiricism. In From a Logical Point of View, 1961.

Mullets, Moustaches and MicroPROLOG

There was more to the eighties than funny-looking moustaches, mullets and screaming electric guitars. I’m of course talking about the home computer revolution: the era when the end user was more or less expected to have a solid understanding of how the hardware and software worked. Some time ago I thought it would be a great idea to deepen my knowledge of this period, and did what any self-respecting man or woman would do in my situation: grow a moustache – ehr, I mean: read a book about it. The book in question is named The Personal Computer Handbook (Swedish: Allt du behöver veta om persondatorer), and is precisely the kind of book that I would normally scoff at. If it had been written today that assessment would probably be correct, but it managed to surpass my wildest imagination. Just look at this image from the second page, and you’ll probably understand why:

That’s the future right there! And the book keeps delivering. Instead of explaining how one uses a word processor, it delves deep into technical details about electronics, peripherals, computer architectures, microcode, assembly code and high-level languages such as BASIC, LOGO, COBOL, LISP and finally the Prolog dialect microPROLOG. The text goes as follows (disclaimer: this is my translation from the Swedish book; the English version probably differs!):

At Imperial College in London, a team under the direction of Robert Kowalski has worked on a simplified version of PROLOG (called microPROLOG) intended to foster children’s capability for logical thinking. This capability is not restricted to computers, but can also be applied in other areas such as mathematics, French, history and geography.

While LOGO’s Turtle Graphics is based on the computer’s ability to create images and figures, microPROLOG is focused on the computer’s ability to manipulate symbolic expressions. But Turtle Graphics has proved to be such a successful system that it has been included in some versions of microPROLOG.

Simple PROLOG programs can be built from propositions that contain names of individuals and relations between them.

One first constructs atomic propositions (so called because they are the simplest possible propositions). For example: Anders is-neighbor-with Lisa. Lotta is-friend-with Nicke. Lisa is-fighting-with Anders.

One can also construct molecular (composite) propositions, for example: Maria likes Johan if Johan likes Maria and Johan is-nice-to Maria. Or: x is-larger-than y if x is-larger-than z and z is-larger-than y. One can then ask the computer questions based on these facts and relations. Writing microPROLOG programs is like writing simplified and logically clear English, and the children become excited by the fact that the computer understands a language so close to their own.

MicroPROLOG is, just like BASIC, an interactive language, where the pupil can add, remove and modify the program and immediately see the result of the modification. Individuals in simple programs can be replaced by variables or lists, for example: Johan has-favourite-groups (Police, Roxy Music, ABBA).

PROLOG can handle such lists by first dividing them into a head and a tail. The head is the first element of the list, and the tail is the rest. The head of Johan’s list is therefore “Police”, and the tail is everything else in the parentheses. The same head-tail structure is also found in LISP and LOGO.

Recent studies show that education in BASIC programming teaches children how to use the computers of the seventies, while education in LOGO and microPROLOG prepares them for the computers of the eighties and the nineties.

I can’t help but be a little moved by such unbridled enthusiasm! What went wrong? When I went to middle school they didn’t even teach us a BASIC dialect! Perhaps it’s time to revitalize the grand plans of microPROLOG and create a new generation of Prolog hackers?

Finally, I feel obliged to post one final picture from the book. It’s referred to as The computer controlled home, and should speak for itself.

Man, the future is such a letdown!

(For the record, the next installment regarding computer generated music will probably be delayed for a few weeks due to an upcoming examination period.)

Prologomenon Goes Viral

Apparently, social media like Twitter are all the rage these days. They’ve been used to convey critical information in emergency situations, helped revolutions against repressive governments, and now it’s finally time to overthrow the corrupt imperative languages that have grown fat on the labors of the working class…. Kidding – some of my best friends are imperative languages! I have however started a Twitter account in connection with the blog, with the intent to a) troll imperative languages, b) post code snippets within the character limit that accomplish something cool. I’ve currently implemented:

All just for fun, of course. Don’t try this at home unless you’re sure you’re able to use cryptic variable/predicate names and abuse all the language features that normally allow one to write readable, concise source code.

Prolog

And with that awful pun, we’re off. The good news is that it can only get better from here. While I can’t claim to have any grand plans regarding the content of the blog, my aim is to cover both practical and theoretical issues in the realm of logic and logic programming. From this premise it immediately follows that “The blogging of logic programming” probably would have been a more accurate name for the blog, since I’ll probably leave the constricted domain of Prolog quite often. Oh well.

I won’t make any special assumptions about the knowledge level of the reader, but I’ll try to keep a didactic and pedagogical approach, for the simple reason that otherwise I probably wouldn’t understand a post myself once the coffee-induced frenzy wore off.