ICFP ’09

I had the pleasure of participating in the ICFP contest again this year. I think I joined a Haskell team in 2002 (which is funny because I’m not proficient in Haskell at all!), but since then I’ve only been part of a Lisp team in 2007. That time I failed to implement a ropes-like library quickly enough to give us any momentum.

So this year it was great to try again; our team ended up being just myself and another Lisp hacker from New Zealand. Since I’m in South Africa there was quite a time zone difference between us, and we both stayed up until the early hours during the last two days.

As always it didn’t take long to write up the initial code for the interpreter, but we spent several hours tracking down bugs, parsing the inputs and figuring out how to approach the problem.

My partner then took over; he wrote all the physics and modeling code while I tried to provide moral support and started hacking on a visualizer.

Initially we used a Lisp library called cgn, which was OK for the initial runs but proved too slow later on. cgn writes out data points to a text file (we had to patch it to write double-floats correctly) and then feeds this to gnuplot.

The writing and reading of these ASCII files was too slow for the kinds of scenarios we were modeling (tens of thousands of data points), so I started over, trimming the dataset to every 100th data point and writing the data files in binary format.
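
To give a feel for the approach, here is a rough sketch (not our actual contest code) of such a binary dump in Common Lisp: keep every 100th sample and write the x/y pairs as raw 8-byte doubles in the machine's little-endian order, which is what the "%float64%float64" record format in the gnuplot script below reads back. The ieee-floats library is assumed for the bit encoding.

(defun write-double-le (stream value)
  ;; encode a double-float as 8 little-endian bytes
  (let ((bits (ieee-floats:encode-float64 (float value 1.0d0))))
    (dotimes (i 8)
      (write-byte (ldb (byte 8 (* i 8)) bits) stream))))

(defun dump-trace (points path &key (stride 100))
  ;; POINTS is a list of (x y) pairs; only every STRIDE-th pair is written
  (with-open-file (out path :direction :output
                            :element-type '(unsigned-byte 8)
                            :if-exists :supersede)
    (loop for point in points
          for i from 0
          when (zerop (mod i stride))
            do (destructuring-bind (x y) point
                 (write-double-le out x)
                 (write-double-le out y)))))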

Since I had never really used gnuplot before I spent several hours reading documentation and poking around until I was able to generate scripts like the following to render our data:


set key bottom right
plot [-100000000:100000000] 'plots/ROCKET.dat' binary record=11951X11951 format="%float64%float64" title 'plots/ROCKET', 'plots/TARGET.dat' binary record=11951X11951 format="%float64%float64" title 'plots/TARGET'
pause -1

I was quite pleased with the results, but there are obviously better ways to display the trace if you have the time and create the right tools.

[Figure: satellite visualization of the rocket and target traces. Note that the start and end labels are swapped in this picture.]

In the end we never really got a solution for the 4th task, but we managed to score about 2000 points with the solutions we did submit.

Church alpha 3

This release fixes some bugs:

http://subvert-the-dominant-paradigm.net/~jewel/church/church-alpha-3.tar.gz

  • When I first wrote some of the Church compiler passes and the sexp parser, I used global variables to store state specific to each pass. I knew that only one thread would be running and that these passes didn’t have to be reentrant. Later I added functionality to compile macros on the fly and to compile dispatch matchers. When I did this I tried to rewrite all the modules that used global state, but I missed the sexp parser, and this caused some strange bugs later on.
  • When allocating memory for dynamically allocated code, I was storing the address of the memory block in a fixnum. This worked fine until malloc started returning addresses in higher memory that used the two most significant bits of a word. These would get shifted out when tagging a fixnum and yield an invalid address. To work around this I now box the address by storing it in an array (see the sketch after this list).
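
To make the second bug concrete, here is an illustrative sketch (not Church’s actual code) of 2-bit fixnum tagging on a 32-bit word: tagging an address whose top bits are set shifts those bits out, so untagging no longer returns the original address.

(defconstant +word-bits+ 32)

(defun tag-fixnum (n)
  ;; shift left by two for the tag bits and keep only the low 32 bits,
  ;; as a 32-bit machine word would
  (ldb (byte +word-bits+ 0) (ash n 2)))

(defun untag-fixnum (tagged)
  (ash tagged -2))

;; An address with its two most significant bits set no longer round-trips:
;; (untag-fixnum (tag-fixnum #xC0001000)) => #x1000, not #xC0001000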

Compiling dispatch matchers at runtime

After implementing inline slot caches I tried various approaches for caching method lookup. At first I thought I could store a global hash table keyed by combining the hash values of the types of the arguments passed to a function. Then, by comparing the argument types with the types stored in the table entries, I could find the correct method to dispatch to. This turned out to be slower than my current implementation, which simply walks the cons lists that describe each method’s type pattern and calls “type?” to test whether each argument is a subtype of the target type.
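
For comparison, here is a rough Common Lisp model of that fallback lookup, with typep standing in for Church’s “type?” and a plain list of (type-pattern . function) pairs standing in for the method table:

(defun find-method-for-args (methods args)
  ;; METHODS is a list of (type-pattern . function) pairs; a pattern is a
  ;; list of type names, one per argument
  (loop for (pattern . fn) in methods
        when (and (= (length pattern) (length args))
                  (every #'typep args pattern))
          return fn))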

My eventual solution was to turn this lookup process into straight-line code by dynamically creating machine code to execute these tests.

Consider the following Church code:


length s:string
   ......

length n:nil
   0

length l:cons
   ......

length a:array
   ......


Here we check the type of the argument to distinguish between nil, strings, cons-lists and arrays. For nil and cons types we can generate an efficient check against the object tag; for strings and arrays we call out to “type?”. The following State code is generated for these tests:


 (DEFINE DISPATCH-MATCHER31
  (LAMBDA (ARG1 ARG2 ARG3 ARG4 ARG5 ARG6 ARG7)
    (LET ((ARG-COUNT (LOAD-ARGUMENT-COUNT)))
      (IF (= ARG-COUNT 1)
          (BEGIN
           (IF
            (AND (= (BAND ARG1 LOWTAG_BITS) TAG_REF)
                 (CHURCH-IF (TYPE? ARG1 137075987) 1 0))
            (RETURN (CALL-C 134993347 1 (LOAD-CLOSURE-POINTER) ARG1)))
           (IF
            (AND (= (BAND ARG1 LOWTAG_BITS) TAG_REF)
                 (CHURCH-IF (TYPE? ARG1 137075507) 1 0))
            (RETURN (CALL-C 134968518 1 (LOAD-CLOSURE-POINTER) ARG1)))
           (IF (= (BAND ARG1 LOWTAG_BITS) TAG_CONS)
               (RETURN (CALL-C 134968094 1 (LOAD-CLOSURE-POINTER) ARG1)))
           (IF (= ARG1 TAG_NIL)
               (RETURN (CALL-C 134968046 1 (LOAD-CLOSURE-POINTER) ARG1))))))
    (CHURCH-DISPATCH-MATCHER-FAILED 1)))

The first two tests check for the TAG_REF tag which is used for normal objects. If the tag matches we call “type?” with the literal address representing the class of the target type. If the test matches we can call the associated method directly.

The other two tests only compare the tag, making it quite efficient for primitive types like cons, fixnums, true and nil.

After generating this code the calling function is patched to jump to this matcher method directly.

Together with the inline slot caches these modifications speed the system up by about a factor of two.

Future work involves determining more precisely when to compile these dispatchers (at the moment I do it after a symbol has been dispatched through 5000 times) and optimizing the generated tests. (Even in this example there is a redundant check for TAG_REF.) It is probably also possible to be more efficient when overlapping types are involved.

Inline slot caches

As part of my performance work I have implemented inline slot caches for slot reads and writes in Church. They work by patching the program code at runtime with assembly code that checks the type of the target object and loads the correct offset for slot access.

In this example we see the code that prepares a call to ‘church-fixup-initial-slot-access’. The first two arguments on the stack, (%esp) and 0x4(%esp), are the argument count and closure pointer used in the State calling convention. The next two arguments, 0x8(%esp) and 0xc(%esp), are the object being accessed and the symbol naming the slot to be accessed.


0x080ce733 mov    %ebx,0xc(%esp)
0x080ce737 mov    %edx,0x8(%esp)
0x080ce73b mov    %ecx,0x4(%esp)
0x080ce73f mov    %eax,(%esp)
0x080ce742 call   0x80ab3b4 
0x080ce747 mov    %eax,%eax
0x080ce749 nop    
0x080ce74a nop    
0x080ce74b nop    
0x080ce74c nop    


The fixup routine gets the type of the object and examines all the parent classes to determine the correct offset for accessing the slot in this object. It then generates x86 machine code and patches the calling function. At the moment I do this by directly emitting a byte sequence for each instruction; this is quite crude and error-prone, but manageable when such a small amount of code is being generated.


              ;; overwrite the original 5-byte call with nops
              (write-byte! patch-start #x90)
              (write-byte! patch-start #x90)
              (write-byte! patch-start #x90)
              (write-byte! patch-start #x90)
              (write-byte! patch-start #x90)

              ;; 8b 4c 24 08: mov 0x8(%esp),%ecx -- load the object pointer
              (write-word! patch-start #x08244c8b)
              ;; 81 39 imm32: cmpl $obj-type,(%ecx) -- compare the class wrapper
              (write-byte! patch-start #x81)
              (write-byte! patch-start #x39)
              (write-word! patch-start obj-type)
              ;; 74 07: je +7 -- skip the slow-path call when the wrapper matches
              (write-byte! patch-start #x74)
              (write-byte! patch-start #x7)
...

First the old call is overwritten with nops and then we emit some comparison and jump instructions. The final output looks like this:


0x080ce733 mov    %ebx,0xc(%esp)
0x080ce737 mov    %edx,0x8(%esp)
0x080ce73b mov    %ecx,0x4(%esp)
0x080ce73f mov    %eax,(%esp)
0x080ce742 nop    
0x080ce743 nop    
0x080ce744 nop    
0x080ce745 nop    
0x080ce746 nop    
0x080ce747 mov    0x8(%esp),%ecx
0x080ce74b cmpl   $0x8302993,(%ecx)
0x080ce751 je     0x80ce75a 
0x080ce753 call   0x80ab8ba 
0x080ce758 jmp    0x80ce760 
0x080ce75a mov    0x4(%ecx),%eax
0x080ce760 mov    %eax,%eax
0x080ce762 nop    
0x080ce763 nop    
0x080ce764 nop    
0x080ce765 nop    

The untagged object pointer is moved into %ecx and the first word (which points to the class wrapper for this object) is compared with the literal address of the class wrapper that was seen on the first access. If it is the same, we simply load the slot at the precomputed offset (0x4) and store it in %eax. If not, we call a runtime function which does a conventional (but much slower) lookup.
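
A high-level model of what this patched code does, written as a Lisp closure instead of machine code, might look like the sketch below. The object representation is a made-up stand-in (a simple vector whose first element plays the role of the class-wrapper word), and slow-slot-offset stands in for the fixup routine that walks the class and its parents.

(defun object-wrapper (object) (svref object 0))
(defun object-ref (object offset) (svref object offset))

(defun slow-slot-offset (object slot-name)
  ;; stand-in for the conventional lookup through the class hierarchy
  (declare (ignore object slot-name))
  1)

(defun make-slot-reader (slot-name)
  (let ((cached-wrapper nil)
        (cached-offset nil))
    (lambda (object)
      (if (eq (object-wrapper object) cached-wrapper)   ; fast path: cmpl + je
          (object-ref object cached-offset)             ; mov offset(%ecx),%eax
          (progn                                        ; slow path: call the fixup
            (setf cached-wrapper (object-wrapper object)
                  cached-offset  (slow-slot-offset object slot-name))
            (object-ref object cached-offset))))))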

Macros and dynamic compilation

I have released the next version of my Church-State system at:

http://subvert-the-dominant-paradigm.net/~jewel/church/church-alpha-1.tar.gz

This release adds proper Church macros and dynamic compilation to the system. In this example, a macro for moving an immediate value to a register:


macro MOVLir imm reg
        <<
                _Or_L 0xb8 (_r4 \reg) \imm
        >>

You can see quoted code wrapped in angle brackets and backslashes used to unquote values that must be inserted into the template.

The macro syntax is not final (like most of Church's syntax) and it should look cleaner as it matures.
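
For readers coming from Lisp: the << >> and backslash pair play roughly the same role as backquote and comma. A loose Common Lisp analogue of the macro above (with emit-or-l as a made-up placeholder for the _Or_L emitter) would be:

(defun emit-or-l (opcode reg-field immediate)
  ;; placeholder for the real emitter: just return what would be emitted
  (list (+ opcode reg-field) immediate))

(defmacro movl-ir (imm reg)
  `(emit-or-l #xb8 ,reg ,imm))

;; (movl-ir 42 3) expands to (emit-or-l #xb8 3 42)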

To handle macros like these the system is now capable of taking Church source code parsed out of the macro body, compiling it to low-level State code and then compiling and linking the resulting machine code into the running image.

This involves resolving low level symbols (I use dlsym) and modifying the Church dispatch tables and macro rewriters to use the new macros. I also have to run an initialization function which sets up the "code vector" with constant values (interned symbols etc).

Now that I have dynamic compilation working, it should be fairly easy to add a REPL to the system.

Performance

As part of this release I have also disabled a lot of optimizations that I had worked on before. These include inline caches for method dispatch and slot access. The reason I have disabled these optimizations is that they cost too much in terms of space compared to the benefit they provide in improved speed.

I'm now pursuing a new approach which uses "class wrappers" marked with random seeds. The idea is that these seeds can be used to hash into lookup tables which memoize the effective method for dispatch. I hope to incorporate these ideas, plus others from contemporary implementations (JavaScript VMs etc.), to make the system substantially faster.
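
Here is a sketch of how I read that idea (a plan, not working code): every class wrapper carries a random seed, the seeds of the argument classes are combined into a hash key, and a per-generic table memoizes the effective method found by the slow lookup.

(defstruct class-wrapper
  name
  (seed (random (expt 2 29))))

(defstruct generic
  methods                              ; consulted by the slow lookup (not shown)
  (cache (make-hash-table)))

(defun dispatch-key (wrappers)
  ;; combine the argument seeds into a single key
  (reduce (lambda (acc w) (logxor (ash acc 5) (class-wrapper-seed w)))
          wrappers :initial-value 0))

(defun effective-method (generic wrappers slow-lookup)
  ;; SLOW-LOOKUP does the full dispatch walk; its result is memoized per key.
  ;; A real implementation would also verify the cached entry against the
  ;; wrappers, since seed combinations can collide.
  (let ((key (dispatch-key wrappers)))
    (or (gethash key (generic-cache generic))
        (setf (gethash key (generic-cache generic))
              (funcall slow-lookup generic wrappers)))))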

Church release

I’m proud to have reached the stage where my Church-State system can compile itself (i.e. the compiler is bootstrapped).

I have made the first alpha release available at:

http://subvert-the-dominant-paradigm.net/~jewel/church/church-alpha-0.tar.gz

To try it out you’ll need a 32-bit x86 Linux system with “ld” installed. (Usually ld will be installed if you’ve installed something like gcc).

There are two simple test files mentioned in the README and there are also instructions for bootstrapping the system.

One thing missing from the release is a compiler that compiles the output from the OMeta parser generator to Church files. That means it’s not possible to change the grammars just yet.

Another incomplete feature is that Church and State macros are hard-coded into the compiler. If you look at church-pass1.church and state-pass1.church you’ll see the various hard-coded macros (some of which are quite complex). To be able to include these macros in the source files where they are used, I need to be able to dynamically compile and load Church code. I’ve completed the first step of this process; see state-dynamic.church and church-test-dynamic-alloc.church for working code that can compile a Church file down to native code, allocate memory for it and link it into the running image.

Once I have Church macros working, I plan to rewrite a lot of assembler-i386.church to use macros instead of functions for emitting machine instructions. I think this will dramatically improve compilation times. While preparing for this release I did a lot of work on performance, even removing array bounds checking and some other safety checks to make it faster. Currently the system bootstraps in 90 seconds on my laptop, but my goal is to make it 2 or 3 times as fast.

Boehm GC

Up until now I have managed to make do without a garbage collector. My development machine has 4 gigabytes of RAM, which allowed me to compile all the test cases, but was not large enough for bootstrapping. I thought it would be fairly easy to plug in the Boehm garbage collector since my system is so simple, but I still encountered a few problems:

  • The data objects in the ELF object files emitted by my compiler were not word-aligned. I had set the addralign field for the .data section to 4, but the objects weren’t aligned. This might have caused the Boehm garbage collector to miss pointers in these objects.
  • Previously I was using malloc, which on my system returns objects aligned to an 8-byte boundary. Church relies on this because it uses the lower 3 bits of a word as a tag. To get the Boehm GC to align allocated memory meant compiling it with the -DALIGN_DOUBLE flag. I couldn’t figure out how to do this with the latest 7.1 release, so I used the 6.8 release instead.
  • For some reason, small allocations aren’t aligned either, so I have to request a minimum of 8 bytes per allocation.

OMeta2 (OMeta in Common Lisp)

Since my last post about my implementation of OMeta in Common Lisp I have updated it to use a newer syntax. This implementation is called “ometa2” for lack of a better name.

I have been using it to generate the Church parser for my Church and State project.

The previous version used to pass some arguments on the “OMeta stack”; in the new version I rewrote it to avoid this overhead. This meant creating an “ometa-input-stream” and a “list-input-stream”, which allow reading objects from a list as if it were a stream.
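
A minimal sketch (not the ometa2 code itself) of such a list-backed input stream: objects are read from a list as if it were a stream, with an explicit position so the parser can backtrack and memoize on it.

(defclass list-input-stream ()
  ((items :initarg :items :reader stream-items)
   (pos   :initform 0     :accessor stream-pos)))

(defun stream-peek (stream)
  (nth (stream-pos stream) (stream-items stream)))

(defun stream-next (stream)
  ;; return the current object and advance the position
  (prog1 (stream-peek stream)
    (incf (stream-pos stream))))

(defun stream-end-p (stream)
  (>= (stream-pos stream) (length (stream-items stream))))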

The new version also uses a hash-table for memoization, which improves performance.
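
The memoization itself can be sketched like this (the keying scheme here is an assumption, not necessarily what ometa2 does): results of applying a rule are cached per (rule . position), and a cache hit restores the input position at which the rule previously ended. It reuses the stream-pos accessor from the sketch above.

(defvar *memo-table* (make-hash-table :test #'equal))

(defun apply-rule-memoized (rule stream)
  ;; RULE is a function of one argument, the input stream
  (let ((key (cons rule (stream-pos stream))))
    (multiple-value-bind (entry found) (gethash key *memo-table*)
      (if found
          (destructuring-bind (result . end-pos) entry
            (setf (stream-pos stream) end-pos)
            result)
          (let ((result (funcall rule stream)))
            (setf (gethash key *memo-table*)
                  (cons result (stream-pos stream)))
            result)))))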

There are still some performance concerns, however: the generated parsers contain too many function/lambda applications, and some kind of optimization to reduce these would make the parsers much faster. For now the performance is sufficient for my work and reasonable considering the simple implementation.

You can get the code here:

http://subvert-the-dominant-paradigm.net/repos/hgwebdir.cgi/ometa2

Bootstrap progress

I have completed the port of the State compiler and the ELF writer. Still remaining are most of the Church compiler and some x86 machine instructions.

The ported system is in the “genesis” subdirectory (click on “files” to see the directory tree):

http://subvert-the-dominant-paradigm.net/repos/hgwebdir.cgi/bootstrap/

Update (Jan 15th): I have ported the whole system such that it can compile the Church runtime code and generate executables. The work remaining is to debug the generated code until it is able to compile and run the whole Church/State system.

Update (Jan 20th): All the Church test cases are working with the ported codebase.