Debian netboot install over PXE

Installing Debian on an ultraportable machine (no FDD, no CD) is fairly easy, but sometimes I get stuck.

To set up the PXE host I installed dhcp3-server and tftpd-hpa.

I used the Debian netboot images, copying netboot.tar.gz to /var/lib/tftpboot and unpacking it there.

I configured /etc/dhcp3/dhcpd.conf for the target machine, using its MAC address. Don’t forget the ‘next-server’ option.

subnet 192.168.0.0 netmask 255.255.255.0 {
  # placeholder addresses -- substitute your own network
  option routers 192.168.0.1;
  option domain-name-servers 192.168.0.1;

  host cmalu {
    hardware ethernet xa:xa:xa:xa:xa:xa;
    next-server 192.168.0.2;   # the TFTP server
    filename "pxelinux.0";
    server-name "name";
  }
}

I then start tftpd like this:

 /usr/sbin/in.tftpd -v -v -v -l -s /var/lib/tftpboot

It logs to /var/log/daemon.log.

The Debian installer's PXE loader was looking for pxelinux.cfg/01-xa-xa-xa-xa-xa-xa (the MAC address in lowercase, dash-separated, prefixed with 01 for Ethernet), so I copied pxelinux.cfg/default to that file.
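That filename convention is easy to compute; a quick sketch in Python (the helper name is mine):

```python
def pxelinux_cfg_name(mac: str) -> str:
    """Build the per-host config filename PXELINUX looks for:
    the ARP hardware type (01 = Ethernet) plus the MAC address,
    lowercase and dash-separated."""
    return "01-" + mac.lower().replace(":", "-")

print(pxelinux_cfg_name("88:99:AA:BB:CC:DD"))  # 01-88-99-aa-bb-cc-dd
```

If no per-host file matches, PXELINUX falls back through shorter names and finally to pxelinux.cfg/default.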

Once the installer starts things are fairly simple.

OMeta in Common Lisp

OMeta is a parsing engine by Ian Piumarta and Alessandro Warth. It combines PEG rules with a syntax for naming matched components and executable actions that can refer to the named parts. A simple OMeta rule looks like this:

ostring ::= $' (~$' <ochar>)*:s $' => [(coerce s 'string)]

Here a string is matched: an opening single quote, then any characters that are not single quotes (bound to s), then a closing single quote.

The action for this rule turns the list of characters matched into a lisp string.
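The control flow of that rule can be sketched outside OMeta; here is a rough Python equivalent (function name and return convention are mine, not part of OMeta):

```python
def match_ostring(text: str, pos: int = 0):
    """Rough equivalent of: $' (~$' <ochar>)*:s $' => [(coerce s 'string)]
    Returns (matched-string, next-position) or None on failure."""
    if pos >= len(text) or text[pos] != "'":
        return None                      # no opening quote
    end = pos + 1
    while end < len(text) and text[end] != "'":
        end += 1                         # (~$' <ochar>)* -- consume non-quotes
    if end >= len(text):
        return None                      # no closing quote
    return text[pos + 1:end], end + 1    # the action: characters as a string

print(match_ostring("'hello' world"))  # ('hello', 7)
```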

I created an OMeta implementation in Common Lisp by bootstrapping from the Squeak implementation of OMeta.

To do this I first modified the OMeta parser to be able to read lisp forms in the action bodies. Next I modified the OMeta compiler to produce lisp forms instead of smalltalk code.

Using these generated forms in combination with hand-coded primitive rules (ometa-prim.lisp) I was able to use two new grammars ometa-parser.g and ometa-compiler.g to fully bootstrap the system.
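The compiler side of that change is essentially a tree walk that emits forms in the host language; a toy sketch of the idea in Python (the AST shape and all names are invented, not OMeta's):

```python
# toy rule AST: ("seq", ...), ("char", c), ("star", node)
def emit(node):
    """Walk a rule AST and emit lisp-ish forms as strings."""
    kind = node[0]
    if kind == "char":
        return f"(match-char {node[1]!r})"
    if kind == "star":
        return f"(many {emit(node[1])})"
    if kind == "seq":
        return "(seq " + " ".join(emit(n) for n in node[1:]) + ")"
    raise ValueError(kind)

# roughly the shape of the ostring rule above
print(emit(("seq", ("char", "'"), ("star", ("char", "a")), ("char", "'"))))
```

Retargeting the compiler then amounts to swapping the strings each branch produces.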

My code is available from this Mercurial repository:

hg clone

To run it:

(load "load.lisp")

Then you can parse a grammar file into its AST:

(parser-grammar-file "grammar.g")

To create an executable parser, first declare a subclass of ometa:

(defclass dnaparser (ometa) ())

Next write the production rules in a grammar file:

base ::=
$A | $T | $G | $C
dsequence ::=
<base>*:s => [ (coerce s 'string) ]

and generate the parser from the grammar:

(generate-parser 'dnaparser "example.g" "example.lisp")

This reads the grammar file “example.g” and produces lisp defmethods for the class ‘dnaparser’. These lisp forms are written to “example.lisp”.

After loading the parser, you can run the productions like this:

CL-USER> (let ((*ometa-class-name* 'dnaparser))
           (run-production 'dsequence (coerce "GGCCGGGC" 'list)))

As an alternative to generate-parser, you can use install-grammar to load the rules without generating an intermediate file.

The Squeak implementation of OMeta supports several extensions that I have not implemented:

– Memoization of previous parse results
– Support for left-recursive rules
– Ability to apply a rule in the super class
– Support for rules that call out to a “foreign” parser during the parse
– An optimizer to remove redundant AND and OR forms produced by the parser

The Farmer, Fox, Chicken and Grain problem

I haven’t had much opportunity to work with logic programming languages, so I attempted this simple planning problem in Prolog:

A farmer is returning to his farm after a long day of working in the fields. He has with him a fox, a chicken, and some grain. He must cross a small stream on his way back to the barn. At the stream, there is a canoe, in which he can transport at most one item across at a time. However, he cannot leave the fox alone with the chicken, or the fox will eat the chicken. Similarly, he cannot leave the chicken alone with the grain because the chicken will eat the grain. Devise a plan (sequence of actions) that the farmer can take to safely bring all of his possessions across the stream and continue on his way home.

I wrote the following solution using gprolog. Both gprolog and the other prolog available on Debian systems, swi-prolog, have really horrible programming environments. I can see why many people would skip over prolog given such poor implementations.

initial(X) :- sort([farmer,fox,chicken,grain],X).

% exclude invalid combinations

check_invalid([chicken, fox]) :- !, fail.
check_invalid([chicken, grain]) :- !, fail.
check_invalid([chicken, fox, grain]) :- !, fail.
check_invalid(_).

% check if we have reached the goal

find_solution(Goal, Goal, _, Moves, Plan) :-
    reverse(Moves, Plan).
% try to make a move, check whether the result is valid
% and check the history to see if we are repeating a move

find_solution(Initial, Goal, History, Moves, Plan) :-
    cross(Initial, [N1,F1], Move),
    msort(N1, SNear),
    msort(F1, SFar),
    check_unattended([SNear,SFar]),
    \+ member([SNear,SFar], History),
    find_solution([SNear,SFar], Goal, [[SNear,SFar] | History], [Move | Moves], Plan).

% the side the farmer leaves behind must be a valid combination
check_unattended([Near,Far]) :- member(farmer, Near), !, check_invalid(Far).
check_unattended([Near,_]) :- check_invalid(Near).

% cross from Near to Far side
cross([Near,Far],Result,Move) :-
    member(farmer, Near),
    cross1([Near,Far],Result, Move, forward).

% cross from Far to Near side
cross([Near,Far],[NN,NF],Move) :-
    member(farmer, Far),
    cross1([Far,Near],[NF,NN], Move, backward).

% let the farmer cross alone
cross1([Near,Far],[NN,NF],Move,Direction) :-
    delete(Near, farmer, NN),
    append([farmer], Far, NF),
    Move = [Direction, farmer, []].

% let the farmer cross with one Animal/grain
cross1([Near,Far],[NN,NF],Move,Direction) :-
    member(Animal, Near),
    Animal \= farmer,
    delete(Near, farmer, N1),
    delete(N1, Animal, NN),
    append([farmer,Animal], Far, NF),
    Move = [Direction, farmer, [Animal]].

run(Plan) :-
    initial(All),
    find_solution([All,[]], [[],All], [], [], Plan).
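For comparison, the same search works as a breadth-first search in an imperative language; here is a sketch in Python (all names are mine), which finds a shortest 7-move plan:

```python
from collections import deque

ITEMS = {"fox", "chicken", "grain"}

def safe(side):
    # the unattended side must not pair fox+chicken or chicken+grain
    return not ({"fox", "chicken"} <= side or {"chicken", "grain"} <= side)

def solve():
    start = (frozenset(ITEMS | {"farmer"}), frozenset())
    goal = (frozenset(), frozenset(ITEMS | {"farmer"}))
    queue = deque([(start, [])])          # (state, plan so far)
    seen = {start}
    while queue:
        (near, far), plan = queue.popleft()
        farmer_near = "farmer" in near
        src, dst = (near, far) if farmer_near else (far, near)
        for cargo in [None] + sorted(src - {"farmer"}):
            moved = {"farmer", cargo} - {None}
            new_src, new_dst = src - moved, dst | moved
            if not safe(new_src):         # would leave an unsafe pair behind
                continue
            state = (new_src, new_dst) if farmer_near else (new_dst, new_src)
            if state == goal:
                return plan + [cargo]     # None means the farmer crossed alone
            if state not in seen:
                seen.add(state)
                queue.append((state, plan + [cargo]))

print(solve())
```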

Metapeg

I was inspired by the metacircular parser written by DV Schorre and Ian Piumarta’s related work in his pepsi/coke project to write a parser generator that can generate itself.

I bootstrapped the parser by using cl-peg to generate lisp code that could eventually generate itself.

I have made several versions of metapeg available; the first two are simple lisp and scheme implementations. The latter two are more complex and allow the parser to walk back up the parse tree during the parse to examine the text matched by previous parse nodes. This functionality allows easy implementation of a @tag construct used to implement the indentation rules in languages like haskell, python and yaml (as suggested here).

As an example I wrote a simple parser for yaml sequences. Here is some sample input:

 - 'foo'
 - 'bar'
 - 'yah'
  - 'a'
  - 'b'
  - 'c'

This is the grammar:

program <- seq-element+ 

inset <- @inset ws+
ws <- [ \t]
nl <- [\n]
ws_or_nl <- ws / nl

seq-element <- "-" ws* string nl { (third data) } / nested-sequence
nested-sequence <- "-" ws* nl inset seq-element (@inset seq-element)* { 
(cons (fifth data) (zip-second (sixth data))) }

string <- "'" [ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz]+ "'" { 
(char-list-to-string (second data)) }

and the output:

CL-USER> (value (parse "../examples/nested_sequence.yaml" "yaml.lisp"))
(("FOO" "BAR" "YAH" ("A" "B" "C")))

The parsers generated by metapeg do not implement memoization of the parse results, possibly making them unsuitable for large grammars or large inputs.
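Memoization here would mean caching each rule's result at each input position (packrat parsing), trading memory for linear-time behavior. A minimal sketch of the idea in Python (decorator and rule names are mine):

```python
def memoize_rule(rule):
    # cache (position -> result) for one rule; assumes the same
    # input text is used for every call during a single parse
    cache = {}
    def wrapped(text, pos):
        if pos not in cache:
            cache[pos] = rule(text, pos)
        return cache[pos]
    return wrapped

@memoize_rule
def digits(text, pos):
    # a toy rule: match one or more digits, return (match, next-pos)
    end = pos
    while end < len(text) and text[end].isdigit():
        end += 1
    return (text[pos:end], end) if end > pos else None

print(digits("123ab", 0))  # ('123', 3) -- computed once, then served from cache
```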


SLURP

I have written Common Lisp, Erlang and Javascript versions of a library I call SLURP. It’s based on the ideas used in the Smalltalk SRP library.

Right now it’s pretty rough, but usable enough to transfer simple objects with named slots (e.g. CLOS objects, Javascript objects). For Erlang I’m using some very ugly code to get it to work with Erlang records (which are a compile-time phenomenon).

It currently supports some basic types such as bytes, unsigned integers, strings, lisp-style cons cells and arrays of the basic types.

In the future I might look at performance optimization and writing a formal definition (machine checkable) for the protocol.

The Good

Encoding native lisp objects is as simple as calling slurp:encode:

(defmethod send-state-change (object-id state-change)
  (let* ((encodedState (slurp:encode state-change)))
    ;; ...
    ))

The Bad

In Javascript, we have to add additional properties to a prototype to tell SLURP how to encode it:

function FuraxSubscriptionRequest(objectId, clientId) {
        this['OBJECT-ID'] = objectId;
        this['CLIENT-ID'] = clientId;
}

FuraxSubscriptionRequest.prototype['SLURP_TYPE'] = "furax-subscription-request";
FuraxSubscriptionRequest.prototype['SLURP_PACKAGE'] = "FURAX";

The Ugly

In Erlang we have to grab the record descriptions at compile time and save them in a dictionary for later reflection.

The source is available via Mercurial:

hg clone

SRP (State Replication Protocol)

While investigating Squeak Smalltalk I discovered SRP, a serialization library.

What I really like about SRP:

  • Compact encoding – Unlike XML markup there are no redundant closing tags and meta information (like the tag name) only has to be recorded once
  • Preservation of the object graph – It can deal with cycles in the object graph and will make sure an object is only written to the output stream once
  • Tolerance of schema/class changes – In languages with powerful reflection mechanisms (like Smalltalk) it’s possible to reconstruct the stored object even when the original class structure has changed or been removed. Since field access in Smalltalk is just a message send, the library can build an object that responds to these messages even if it is not an instance of the original class used when storing the object.
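The graph-preservation point is worth unpacking: the encoder keeps a table of objects already written and emits a back-reference instead of re-encoding. A toy sketch in Python (the output format and names are mine, not SRP's):

```python
def encode(obj, table=None, out=None):
    """Write each object once; later occurrences become ('ref', n)
    back-references, so cycles in the object graph are preserved."""
    if table is None:
        table, out = {}, []
    if id(obj) in table:
        out.append(("ref", table[id(obj)]))
    elif isinstance(obj, list):
        table[id(obj)] = len(table)      # remember before recursing
        out.append(("list", len(obj)))
        for item in obj:
            encode(item, table, out)
    else:
        out.append(("val", obj))
    return out

a = [1, 2]
a.append(a)       # a cyclic structure
print(encode(a))  # [('list', 3), ('val', 1), ('val', 2), ('ref', 0)]
```

A naive recursive encoder would loop forever on such input; the table is what makes the traversal terminate.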

As an exercise I did a quick port of SRP to the Slate environment.

I need a mechanism similar to SRP or JSON to transfer objects between Common Lisp, Erlang and Javascript for my Guli project.