Rude repl

I have added eval and a repl (read eval print loop) to Church.

> + 3 4
> length "foobar"
> eval "* 19 3"
> load "church/test/"
> (main)

Up to now I have only been using Church as a command-line compiler which produces executable files. Yet I have always preferred interactive language environments (lisp, smalltalk, python etc) to stop-compile-run languages.

All the machinery for writing a repl has been in Church for a while, including

  • Church parser available as a library
  • Church compiler available at runtime
  • Machine code generator and dynamic linker available at runtime
  • The ability to modify the dispatch table at runtime

This makes the repl easy to implement:

    rstr = (read-line)
    if (null? rstr)
        return-from repl nil
        print (eval rstr)
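The same loop can be sketched in Python. This is only an illustration of the shape of the repl, not Church's actual machinery: `compile_and_run` is a hypothetical stand-in for the parse/compile/link/call pipeline, and Python's own `eval` is used purely as a placeholder.

```python
def compile_and_run(source):
    # Stand-in for Church's parse -> compile -> link -> call pipeline;
    # Python's eval is a placeholder, not Church's eval.
    return eval(source)

def repl(read_line=input):
    # Mirrors the Church loop above: read a line, stop on end-of-input
    # (Church's (null? rstr) case), otherwise evaluate and print.
    while True:
        try:
            line = read_line("> ")
        except EOFError:
            return None
        print(compile_and_run(line))
```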

Eval is a little messier because the compiler is designed to compile a whole method at a time and doesn’t know what to do with variables that are neither global nor local.

To implement eval I wrapped the eval string in a lambda that I return from a method which gets run after compilation.

   eval-compiled-function = (fn -- eval_str) 

This works for certain language expressions, but does not presently provide the ability to assign to “repl” variables.
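The wrapping trick can be mimicked in Python: compile the expression text as the body of a generated function, then call that function to get the value. This is a sketch of the idea only, assuming nothing about Church's compiler interface; the function name is invented. It has the same limitation noted above: assignments made inside the wrapper don't persist anywhere.

```python
def church_eval(expr_src):
    # Wrap the expression text in a function definition, compile the
    # whole unit, then call the compiled function to obtain the value.
    namespace = {}
    exec("def _eval_compiled():\n    return (" + expr_src + ")", namespace)
    return namespace["_eval_compiled"]()
```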

Another drawback is that multiline statements are currently not possible (unlike python and lisp).

In the future I hope to experiment with a SLIME-like extension to emacs which communicates to Church across a socket, allowing interactive evaluation and compilation of source code from within the editor.


I have ported jonesforth to 64-bit x86 code.

jonesforth is a tutorial-style implementation of forth which explains in detail how the compiler and runtime is implemented. Porting the code to a slightly different assembly language helped me to think carefully about what each primitive does and about how they are used in the runtime code.

As noted in the jonesforth comments, the original advantage of using direct-threaded code on a 16-bit machine is that calling each word can be encoded in two bytes instead of three. That’s a savings of 33%. On 32-bit x86, it’s four bytes versus five, saving 20%. In my 64-bit implementation I chose to extend the 4-byte addresses to 8-byte words. This actually results in wasting space rather than saving it because on x86-64 calls and branches are usually encoded with a 5-byte instruction using relative displacement.
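The cell-vector idea behind threaded code can be sketched in Python, with function objects standing in for the 8-byte cell addresses of the 64-bit port. This is a toy model only: it doesn't handle nested colon words, the return stack, or immediate words.

```python
stack = []

# Primitive words, each implemented directly (like assembly primitives).
def two():   stack.append(2)
def three(): stack.append(3)
def plus():  stack.append(stack.pop() + stack.pop())

# A colon definition compiles to a vector of cells, one per word it
# calls; in the 64-bit port each cell is an 8-byte address.
FIVE = [two, three, plus]

def execute(word):
    ip = 0                 # instruction pointer into the cell vector
    while ip < len(word):
        cell = word[ip]
        ip += 1            # NEXT: advance ip, then jump through the cell
        cell()
```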

The port was fairly straightforward: I mostly just replaced the 32-bit registers (eax, esp, esi etc) with their 64-bit equivalents (rax, rsp, rsi etc) and changed every reference to the word size from 4 to 8.

The biggest difference is that syscalls use different registers on 64-bit linux, and those registers can be clobbered during the call.

You can get the code from the mercurial repository (or browse it here):

hg clone

To compile it:

gcc -m64 -nostdlib -static -Wl,-Ttext,0 -Wl,--build-id=none -o jonesforth64 jonesforth64.S

To run it:

cat jonesforth64.f - | ./jonesforth64
1 2 3 4
4 3 2 1
3 2 4 1

I’ve tested most of the code in the .f file, but I haven’t yet implemented C strings, file-io or the built-in assembler.

I’ve tried to keep the comments intact, but haven’t updated them to reflect different word sizes or registers etc.

ICFP ’09

I had the pleasure of participating in the ICFP contest again this year. I think I joined a haskell team in 2002 (which is funny because I’m not proficient in haskell at all!), but since then I’ve only been part of a Lisp team in 2007. That time I failed to get a ropes-like library working in time to give us enough momentum.

So this year it was great to try again; our team ended up being just me and another Lisp hacker from New Zealand. Since I’m in South Africa there was quite a time zone difference between us, and we both stayed up till the early hours during the last two days.

As always it didn’t take long to write up the initial code for the interpreter, but we spent several hours tracking down bugs, parsing the inputs and figuring out how to approach the problem.

My partner then took over; he wrote all the physics and modeling code while I tried to provide moral support and started hacking on a visualizer.

Initially we used a lisp library called cgn, which was ok for the initial runs but proved too slow later on. cgn writes out data points to a text file (and we had to patch it to write double-floats correctly) and then feeds this to gnuplot.

The writing and reading of these ascii files was too slow for the kinds of scenarios we were modeling (tens of thousands of data points), so I started over, trimming the dataset by only using every 100th data point and writing data files in binary format.

Since I had never really used gnuplot before I spent several hours reading documentation and poking around until I was able to generate scripts like the following to render our data:

set key bottom right
plot [-100000000:100000000] 'plots/ROCKET.dat' binary record=11951X11951 format="%float64%float64" title 'plots/ROCKET', 'plots/TARGET.dat' binary record=11951X11951 format="%float64%float64" title 'plots/TARGET'
pause -1

I was quite pleased with the results, but there are obviously better ways to display the trace if you have the time and create the right tools.

Satellite visualization

(Note the start and end labels are swapped in this picture)

In the end we never really got a solution for the 4th task, but managed to score about 2000 points with our solutions.

Church release

I’m proud to have reached the stage where my Church-State system can compile itself (ie the compiler is bootstrapped).

I have made the first alpha release available at:

To try it out you’ll need a 32-bit x86 linux system with “ld” installed. (Usually ld will be installed if you’ve installed something like gcc).

There are two simple test files mentioned in the README and there are also instructions for bootstrapping the system.

One thing missing from the release is a compiler that compiles the output from the OMeta parser generator to Church files. That means it’s not possible to change the grammars just yet.

Another incomplete feature is that Church and State macros are hard-coded into the compiler. If you look at and you’ll see the various hard-coded macros (some of which are quite complex). To be able to include these macros in the source files where they are used I need to be able to dynamically compile and load church code. I’ve completed the first step of this process, see and for working code that can compile a church file down to native code, allocate memory for it and link it into the running image.

Once I have Church macros working, I plan to rewrite a lot of to use macros instead of functions for emitting machine instructions. I think that this will dramatically improve compilation times. While preparing for this release I did a lot of work on performance, even removing array bounds checking and some other safety checks to make it faster. Currently the system bootstraps in 90 seconds on my laptop, but my goal is to be 2 or 3 times as fast.

Further performance improvements

Since my last post on performance improvements I have achieved more speedups.

My primary test case is running an OMeta parser against the runtime file. A critical factor in the performance of this type of parser is the memoization of previous parse results. (When the parser backtracks it may apply the same rule for the same input several times, if the results are memoized they can be reused instead of being recomputed).
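The memoization scheme can be sketched in Python. The rule and the counter here are toy stand-ins, not OMeta's actual API: the memo table is keyed on (rule, input position), so a backtracking retry at the same position is served from the table instead of re-running the rule.

```python
rule_applications = {"count": 0}

def apply_rule(rule, pos, memo):
    # Key the memo table on (rule, input position); a backtracking
    # retry at the same position reuses the stored result.
    key = (rule.__name__, pos)
    if key not in memo:
        rule_applications["count"] += 1
        memo[key] = rule(pos)
    return memo[key]

def digit(pos):
    # Toy rule standing in for a real OMeta rule: "consume" one
    # character by returning the next input position.
    return pos + 1
```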

Previously I had implemented a high-level hashtable in Church for the memo table. Due to the overhead of dynamic dispatch in the hashing and array access, this was quite slow. I replaced it with a low-level hashtable from kazlib. As a rule I have tried to minimize external dependencies for this project, but at this point I would rather reuse this external library than rewrite it in State.

Next I implemented inline caching for dynamic dispatch. By storing the argument types and the code pointer for the previous call in the “code vector” associated with a function, it’s possible to avoid the expensive lookup operation most of the time.
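A minimal Python sketch of the inline-cache idea, with invented names (this is not Church's runtime API): each call site remembers the argument types and target from the previous call, and only falls back to the full method lookup on a type mismatch.

```python
slow_lookups = {"count": 0}

# Toy method table keyed on (selector, argument types); stands in for
# Church's full dispatch search.
METHODS = {
    ("add", (int, int)): lambda a, b: a + b,
    ("add", (str, str)): lambda a, b: a + b,
}

class CallSite:
    """One cache per call site: the previous call's argument types and
    code pointer, checked before falling back to the expensive lookup."""
    def __init__(self, selector):
        self.selector = selector
        self.cached_types = None
        self.cached_target = None

    def call(self, *args):
        types = tuple(type(a) for a in args)
        if types != self.cached_types:      # miss: do the slow lookup
            slow_lookups["count"] += 1
            self.cached_target = METHODS[(self.selector, types)]
            self.cached_types = types
        return self.cached_target(*args)    # hit: jump straight through
```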

Lastly I also implemented inline caching for slot lookup. Slot lookup basically calculates the offset of the field within an object by counting all the slots in the class hierarchy. We cache this offset based on the argument type and the slot name.
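The slot-offset cache can be sketched the same way, again with illustrative names: the offset is the slot's position counting all inherited slots, and caching it per (class, slot name) means the hierarchy walk happens only once.

```python
class Cls:
    def __init__(self, name, parent, slots):
        self.name, self.parent, self.slots = name, parent, slots

def all_slots(cls):
    # Count all the slots in the class hierarchy, parents first.
    return (all_slots(cls.parent) if cls.parent else []) + cls.slots

offset_cache = {}

def slot_offset(cls, slot_name):
    # Cache the computed offset keyed on (class, slot name) so the
    # hierarchy walk happens only on the first lookup.
    key = (cls.name, slot_name)
    if key not in offset_cache:
        offset_cache[key] = all_slots(cls).index(slot_name)
    return offset_cache[key]
```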

All these changes yield a three-fold performance improvement. The test case now runs in 1 second versus 0.5 seconds for the Lisp implementation. The instruction read count is down from 13 billion before these changes to 2.7 billion.

This is a screenshot of the latest callgrind output. From this call graph for “church-make-closure” we can see that “church-alloc-object” is a candidate for optimization.

KCachegrind output for the callgrind tool

New calling convention for State

I have implemented a new calling convention for State. Functions now take:

<argument count> <closure pointer> <arg1> <arg2> ...

There are two special forms (load-argument-count) and (load-closure-pointer) for accessing these hidden arguments.

Adding an argument count allows the implementation of inline caches (see the next post) and of a “rest parameter” which collects all extra arguments into a list.

To implement the rest behaviour in the Church compiler I modified the church grammar to tag a rest parameter as a “:rest-var”

main args
     foo 1 2 3 4 5

foo a *rest
    print a
    print rest

This will print:

1
[2 3 4 5]

and then arrange for the code to call out to some runtime code to construct the list:

(define |church-setup-rest-var|
  (lambda (rest-var-pointer arg-count fixed-param-count)
    (let ((rest-temp TAG_NIL)
          (remaining-temp (- arg-count fixed-param-count))
          (offset-temp 0))
      (tagbody
       check
         (if (= remaining-temp 0)
             (progn
               (set! (deref rest-var-pointer) (|church-reverse!| rest-temp))
               (go end)))
         (push (deref (+ rest-var-pointer offset-temp)) rest-temp)
         (set! offset-temp (+ offset-temp 4))
         (set! remaining-temp (- remaining-temp 1))
         (go check)
       end))))

Improving dispatch performance in Church

Over the last week I have been working on performance optimizations for Church code. My basic test case is to run the OMeta parser (implemented in Church) on the “” file in the Church runtime library. This file implements most of the Church runtime code. Parsing this file with the Common Lisp implementation of OMeta takes 0.5 seconds on my laptop. Initial runs of the Church code took well over a minute.

The first major change was to cache symbol lookups. Previously I was storing all symbol objects in an a-list (a linked list) and every time a symbol was used the table would be searched linearly using string comparisons. To avoid this lookup I modified the State compiler to take a (load-constant-value <raw-value>) form. When the state compiler compiles this form it creates a cell in the data segment from which to load a constant value. It also generates some initialization code which will call out to the runtime to initialize the constant.

By collecting all these initialization sequences and running them on startup we can intern all the symbols used by the program before it starts running. Each symbol reference thereafter is a simple memory load.

Using these cached symbols makes class lookup a lot easier too. Previously I stored all classes in a table keyed by symbol and searched it linearly when looking up classes. In the new implementation I store class objects in a “class” slot associated with the symbol representing the class’s name.

Similarly, the dispatch table was stored as a global list and searched first by symbol (selector) and then compared according to argument types. I moved the dispatch rules to the symbol objects, making the search much shorter.
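The combination described above — interned symbols, a class slot on the symbol, and dispatch rules hanging off the symbol — can be sketched in Python. All names here are my own, not Church's runtime API.

```python
class Symbol:
    def __init__(self, name):
        self.name = name
        self.klass = None     # the "class" slot described above
        self.methods = []     # dispatch rules hang off the symbol

symbol_table = {}

def intern(name):
    # After startup every reference is a table hit returning the same
    # object, so symbol identity is pointer identity and class/method
    # lookup starts from the symbol instead of a global list.
    if name not in symbol_table:
        symbol_table[name] = Symbol(name)
    return symbol_table[name]
```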

These changes brought the instruction execution count down to about 40 billion instructions for Church (versus about 650 million for Lisp).

Further optimizations involved removing all cons’ing from the dynamic dispatch routines, inlining as many of those calls as possible to avoid function call overhead and rewriting recursive routines as tagbody loops.

I also reordered the tests in the “class-of” code to check the most common cases first.

All these optimizations brought the time down to about 4.5 seconds and the execution count to 13 billion. This is still about a factor of 10 slower than the lisp implementation. Eventually I hope to bring it down to about a factor of 2; possible future optimizations are inline caches for method lookup and optimized cons and closure allocation. Later I might also look at more sophisticated approaches, such as type inference, an optimizing compiler pass and runtime profiling.

To profile the code I tried two tools, Intel’s VTune and the callgrind module of valgrind. VTune was quite disappointing: besides having to paste serial numbers into an obscure installation utility (which failed the first few times), the download was 500 megabytes. After installation the sampling driver failed to work, but I managed to run the call tracing module.

valgrind provides similar information to VTune, but the kcachegrind visualizer is much better: the call graph is very easy to work with, and it is also possible to see hot loops at the assembly level.

Bootstrapping Church

As part of my work so far on the Church/State language, I have implemented a parser generator, two compilers, a machine code generator and a library to write ELF files. All of these were implemented in Common Lisp. In addition to this I have implemented runtime code for State (100 LOC) and for Church (about 2000 LOC).

The long term goal for this project is to bootstrap: rewrite all the components mentioned above in Church itself, so that the system can eventually compile itself without relying on an external system such as Common Lisp or a C compiler.

So far I have ported OMeta (with the new JS style syntax) to Church and written a Church grammar that produces Church expressions as actions. I have also completed the basic framework of the code generator (enough to assemble code that adds two constant integers together).

Here is an outline of the remaining work:

  • Port Church compiler – the tricky part here is implementing macros. Currently I use ‘eval’ to compile macro bodies in lisp; doing this in church requires dynamic compilation of church code too.
  • Port State compiler – This includes writing a sexp parser (a lisp reader). Hopefully this will be fairly easy to do with the current OMeta implementation. I will probably also have to improve the quasiquote and splice operators in Church to implement this. Currently State macros are implemented in Lisp; I think the best approach will be to try to rewrite them in State
  • Port the rest of code generator – Should be a fairly straightforward matter of implementing the remaining AST instructions and machine instructions
  • Port the ELF library – Since I used the binary-types library in the Lisp implementation, I’m not sure what approach will be necessary to do this in Church. I might end up implementing some kind of MOP system which allows special metaclasses and special slot attributes which can be used to control the binary output of class instances

Along the way I’m sure I will have to add more runtime functionality, and will probably also have to create an operator similar to destructuring-bind for deconstructing lists.

I hope to be able to avoid adding dynamically scoped variables, multiple value return and a “rest” parameter, all of which were used in the Lisp implementation.

First-class closures

I have implemented first-class closures in my experimental language Church/State. Until recently I was using partial closures (also called “downward funargs”) because I thought full closures would not be necessary to bootstrap the system. I found, however, that my implementation of OMeta in Church required full closures. Unlike partial closures which were stack allocated and could not escape the dynamic extent of the call that created them, full closures are heap allocated.

One of the consequences of this is that there is no longer such a clear separation between Church and State. (I named these languages as I did because I thought I could maintain a clear division between them.) Previously State was supposed to implement a “static” language with very few run-time features. Now, however, because closure creation requires heap allocation, the State compiler has to call out to an allocation routine and create a runtime structure called a “closure-record”.

Since I wanted closure objects to look like other objects in Church, each closure record has a class-pointer and slots that are compatible with the Church object system. To implement this the State compiler essentially has to call out to Church runtime code, which makes the two systems mutually dependent, instead of having only Church rely on State.

Below is the Church code declaring the closure-related classes. Each closure-record has a parent pointer to create a chain of closure-records. These closure-records are passed as a “hidden” argument to all State functions, meaning that the State calling convention is no longer compatible with the C calling convention. To call C routines from State, I have provided the “call-c” operator.

class closure-record

class closure
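As a rough illustration of the parent-pointer chain described above, here is a Python sketch of how a chain of closure records can resolve captured variables. The field names are illustrative, not Church's actual layout.

```python
class ClosureRecord:
    def __init__(self, parent, slots):
        self.parent = parent   # enclosing closure-record, or None
        self.slots = slots     # captured variable name -> value

def lookup(record, name):
    # Walk the chain outward until the variable is found, the way
    # compiled code would follow parent pointers to outer scopes.
    while record is not None:
        if name in record.slots:
            return record.slots[name]
        record = record.parent
    raise NameError(name)
```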

Church and State (part 2)

While State is a low level language that compiles directly to machine code, Church has features common to many high level languages (and some uncommon ones):


  • A block-indented syntax (similar to Python or Haskell)
  • Macros
    syntax new [(&rest args) (let ((class-name (second (first args)))
                                   (key-args (loop for (_key k v) in (cdr args) collect `(:symbol ,k) collect v)))
                                `(:let (((:var "o") (:inline state::state-make-object (:symbol ,class-name) ,@key-args)))
                                    (:apply "init" (:var "o"))
                                    (:var "o"))))]
  • A class system
  • Dynamic dispatch (dispatch depends on all the arguments of a function, also known as multiple-dispatch)
  • Pattern matching syntax like Erlang or Prolog (yet to be implemented)
  • A high level ‘loop’ control structure
    bar list1 list2
                    for x in list1
                    for y in list2
                    when x
                    do (print x)
                    when y
                    do (print y)
                    collect y
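The multiple-dispatch feature from the list above can be sketched in a few lines of Python. A dict keyed on the full tuple of argument types stands in for Church's sorted pattern list; all names are illustrative.

```python
methods = {}

def defmethod(selector, types, fn):
    # Register a method under the full tuple of argument types.
    methods[(selector, types)] = fn

def send(selector, *args):
    # Dispatch on the types of ALL arguments, not just the first.
    return methods[(selector, tuple(type(a) for a in args))](*args)

defmethod("area", (int, int), lambda w, h: w * h)
defmethod("area", (float, float), lambda w, h: w * h)
```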

At the moment I have a fairly primitive implementation of dynamic dispatch: I use a cons list to store lists of patterns and bodies, sorted first by symbol and then by argument types.

Symbols are also interned into a cons list; later I will have to find a way to include them directly in compiled code.

Loops are expanded into State’s tagbody forms.

The parser is implemented in my Common Lisp version of OMeta, which I plan to port to Church later.

My future work is focussed on writing the Church and State compilers in Church itself, thus bootstrapping the language.