New OMeta interpreter for Church-State

This is a new release of Church-State.

This incorporates a new OMeta interpreter that I have written to replace the parser generator used in previous versions.

Implementing the Church parser with an interpreter rather than with a parser generator has two main benefits:

  • There is no longer a requirement to have the Church compiler available to generate a new parser. This caused some bootstrapping issues and made it harder to provide user-extensible parsing.
  • The new approach is faster than before. Interpreters usually have more overhead than compiled equivalents, but in this case the compilation overhead for the generated Church parser was very large. Overall it is faster to parse the Church grammar file into an intermediate structure and interpret that than it is to generate and compile a very large parser.

The new system now bootstraps in 28 seconds (versus 34 seconds previously).

Grammar actions

The most novel part of this new interpreter implementation is the way that I handle “grammar actions”.

Previously I implemented both a parser generator and an interpreter; both used runtime code compilation to turn grammar actions into executable closures.

Take this example of the grammar rule for an assignment:

assignment = assignable-expression:lhs ws* "=" ws* expression(nil):rhs -> <<`[_assign ,lhs ,rhs]>>

which would parse the following Church code:

a = 1

In some parsing frameworks it is convenient to automatically construct a parse tree from a given grammar. In the case of scannerless parsers (which is what I use in Church) we would like to elide the nodes that would result from parsing whitespace. So in the example above the parts that we want to keep are labeled lhs and rhs. The action at the end of the rule is a quasiquoted list template that has the values of lhs and rhs substituted into it.

When evaluated, this action will return something like this:
[_assign [_var "a"] [_number 1]]


To implement this one can create a closure with the following code:

fn arg1 arg2 -- `[_assign ,arg1 ,arg2]

which is called at parse time with the values for each labeled part.

This requires having the entire compiler framework available at parse time, including the parser which you are implementing!

To avoid this kind of bootstrapping issue and to simplify the parser, I decided to implement grammar actions with a simple quasiquoting syntax.

Quasiquoted actions

In the grammar actions implemented in the latest Church-State parser it is only possible to quote a list or symbol, unquote a variable or a list, and specify a literal.

For example the following grammar rule:

pattern = cname:name ws* (ws* pattern-argument)*:args -> <<`(_pattern ,name ,@args)>>

creates a list starting with the symbol _pattern followed by the pattern name and then containing all the elements of the args list.

With closures it is possible to use list functions like cons, first and rest, or to call routines that convert strings of digits into integers. This is not possible with quasiquoted actions, which therefore require a later pass over the parse tree to do the necessary conversions. I have added routines to perform these transformations so that the resulting parse tree is exactly the same as that returned by the previous parser implementations.
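To make the mechanism concrete, here is a minimal Python sketch of a quasiquote-template evaluator of this kind. The tagged-tuple representation and the tag names ("lit", "var", "splice", "list") are my own illustrative choices, not the actual Church-State data structures.

```python
# Hypothetical sketch of a quasiquote-template evaluator for grammar
# actions, assuming templates have been pre-parsed into tagged tuples.

def eval_template(template, bindings):
    """Evaluate a quasiquoted action template against rule bindings."""
    tag = template[0]
    if tag == "lit":                      # literal symbol or number
        return template[1]
    if tag == "var":                      # ,name -> substitute the bound value
        return bindings[template[1]]
    if tag == "list":                     # `[...] -> build a list
        out = []
        for item in template[1]:
            if item[0] == "splice":       # ,@args -> splice in all elements
                out.extend(bindings[item[1]])
            else:
                out.append(eval_template(item, bindings))
        return out
    raise ValueError("unknown template tag: %r" % (tag,))

# `[_pattern ,name ,@args] with name="point" and args=["x", "y"]:
template = ("list", [("lit", "_pattern"), ("var", "name"), ("splice", "args")])
print(eval_template(template, {"name": "point", "args": ["x", "y"]}))
# -> ['_pattern', 'point', 'x', 'y']
```

At parse time the interpreter only needs this small evaluator and the pre-parsed templates; no compiler has to be present.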

The interpreter can be seen here:

along with the grammar for parsing OMeta grammars:

Church alpha 5

This is a new release of Church-State.

To do a bootstrap, type make bootstrap1 and make bootstrap2.

To test the new OMeta interpreter, build it with make ometa-test-parser and run it with ./bin/ometa-test-parser test.g test.txt

This release contains the following new items:

  • A new OMeta interpreter (Although this new interpreter is capable of parsing the full Church grammar, I’m still keeping the old OMeta parser generator for bootstrapping reasons)
  • Optimized dispatch structures for faster dispatch when inline caching is not effective
  • Improved inline caches

Climbing trees

Here is a solution to an ACM programming problem requiring one to analyse trees describing parental relationships.

I wrote this in Maude even though the classic tool for this kind of problem is Prolog.

The solution
The output

We start by importing integers and quoted identifiers (symbols):

protecting INT .
protecting QID .

Next we define the sorts used in the program. Rel means any kind of relationship, and NamePair is just a pair of names. The sorts Parent, Sibling and Cousin are all relationship types.

sort Rel .

sorts NamePair .
sorts Parent Sibling Cousin RelType .
subsorts Parent Sibling Cousin < RelType .

The following operators (or constructors) are used to construct various relationship facts.

op Parent : Nat -> Parent .
op Cousin : Nat Nat -> Cousin .
op Sibling : -> Sibling .

op pair : Qid Qid -> NamePair .
op rel : NamePair RelType -> Rel .
op parent : Qid Qid -> Rel .

op Empty : -> Rel [ ctor ] .
op __ : Rel Rel -> Rel [ ctor assoc comm id: Empty ] .

The problem set is initialized with the following set of parent-child facts:

op init : -> Rel .
eq init =
parent(', 'oswald.veblen)
parent('stephen.kleene, '
parent('dana.scott, '
parent('martin.davis, '
parent('pat.fischer, '
parent('mike.paterson, 'david.park)
parent('dennis.ritchie, 'pat.fischer)
parent(', '
parent('les.valiant, 'mike.paterson)
parent('bob.constable, 'stephen.kleene)
parent('david.park, ' .

We use the following variables to name various objects in our rules:

vars A B : Qid .
vars X Y Z : Qid .

vars N M O P : Nat .
vars R : RelType .
vars REST : Rel .

These four rules describe the different relationship types:

---- Note that the input is in (child, parent) format, but we want to output (parent, child)

rl [ parent ] : parent(A, B) => rel( pair(B, A), Parent(0)) .

rl [ grandparent ] :
rel( pair(X, Y), Parent(0))
rel( pair(Y, Z), Parent(N)) => rel( pair(X, Z), Parent(N + 1)) .

---- sibling is reflexive

rl [ sibling ] :
rel( pair(X, Y), Parent(0))
rel( pair(X, Z), Parent(0)) => rel( pair(Y, Z), Sibling) rel( pair(Z, Y), Sibling) .

---- check least ancestor
rl [ cousin ] :
rel( pair(X, Y), Parent(N))
rel( pair(X, Z), Parent(M)) => rel( pair(Y, Z), Cousin(min(N,M), abs(N - M))) .
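As a rough illustration of what these rules compute, here is a Python sketch under the assumption that the input is a list of (child, parent) pairs; the function names and data layout are hypothetical, and Maude's nondeterministic multiset rewriting is replaced by an explicit breadth-first walk.

```python
# Hypothetical Python rendering of the rewrite rules above.
# Parent(0) means direct parent, Parent(1) grandparent, and so on;
# Cousin carries (min(N, M), abs(N - M)) for a shared ancestor.

def ancestors(facts):
    """Map each person to {ancestor: generation distance} (Parent(0) = parent)."""
    parent_of = {}
    for child, parent in facts:
        parent_of.setdefault(child, []).append(parent)
    result = {}
    for child in parent_of:
        dist, frontier = 0, parent_of[child]
        while frontier:
            for person in frontier:
                result.setdefault(child, {}).setdefault(person, dist)
            frontier = [p for a in frontier for p in parent_of.get(a, [])]
            dist += 1
    return result

def cousin(anc, x, y):
    """Cousin(min(N, M), abs(N - M)) for the closest shared ancestor, or None."""
    shared = set(anc.get(x, {})) & set(anc.get(y, {}))
    degrees = [(anc[x][a], anc[y][a]) for a in shared]
    return min((min(n, m), abs(n - m)) for n, m in degrees) if degrees else None

facts = [("b", "a"), ("c", "b"), ("d", "a")]   # a is parent of b and d; b of c
print(ancestors(facts)["c"])                    # c's ancestors: b at 0, a at 1
print(cousin(ancestors(facts), "c", "d"))       # shared ancestor a -> (0, 1)
```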

These search expressions describe the input queries (we could make an effort to parse these properly instead):

search [1] in CLIMBING : init =>+ rel (pair ('stephen.kleene, 'bob.constable), R) REST .
search [1] in CLIMBING : init =>+ rel (pair (', 'stephen.kleene), R) REST .

----- search [1] in CLIMBING : init =>+ rel (pair ('les.valiant, ', R) REST .
----- swap this query to search for father instead of child

search [1] in CLIMBING : init =>+ rel (pair (', 'les.valiant), R) REST .

search [1] in CLIMBING : init =>+ rel (pair ('les.valiant, 'dennis.ritchie), R) REST .
search [1] in CLIMBING : init =>+ rel (pair ('dennis.ritchie, 'les.valiant), R) REST .
search [1] in CLIMBING : init =>+ rel (pair ('pat.fischer, 'michael.rabin), R) REST .


Loop macros

I have released the latest version of Church-State here:

You can also get the source code via mercurial:

hg clone

or browse it.

In the previous version of Church, loop constructs were expanded in the Church compiler directly to the State “tagbody” form. I have removed this code and implemented it as a Church macro which expands to a new Church “tagbody” form.

The aim behind this is to make it easier to do optimization and flow analysis on Church code in the future.

User extensible parsing

I have been experimenting with user-extensible parsers in Church, even though I don’t have a use for them yet.

I added a new language construct called “extend-grammar”, e.g.:

extend-grammar ometa testg {
test-rule = "test" ws+ cname:a -> << 42 >>
}

This test-rule will match the string “test”, followed by whitespace and a name. The rule will ignore its input and return the value 42.

This grammar is processed at parse time, converted to Church code, and dynamically compiled and linked into the running process.

At present this happens after the whole file is parsed, so it’s currently not possible to add a new grammar rule and use it in the same source file.

To activate the rule I added another construct, “eval-when”, e.g.:

eval-when compile load
      church-add-parser-extension 'test-rule     

This executes the code when the file is compiled (load doesn’t work yet).

In this example we add ‘test-rule to the list of OMeta functions to be called by a special new grammar rule called ‘user-form.

The new version of ‘user-form is compiled and linked into the process, immediately making it available to the parser.

For this test case I then load the following file:

        print "in dotest"
        print (test notanumber)

which prints 42 because the parser has intercepted what would normally be a method call.

Rude repl

I have added eval and a repl (read eval print loop) to Church.

> + 3 4
> length "foobar"
> eval "* 19 3"
> load "church/test/"
> (main)

Up to now I have only been using Church as a command-line compiler which produces executable files. Yet I have always preferred interactive language environments (Lisp, Smalltalk, Python etc.) to stop-compile-run languages.

All the machinery for writing a repl has been in Church for a while, including:

  • Church parser available as a library
  • Church compiler available at runtime
  • Machine code generator and dynamic linker available at runtime
  • The ability to modify the dispatch table at runtime

This makes the repl easy to implement:

                        rstr = (read-line)
                        if (null? rstr)
                                return-from repl nil
                                print (eval rstr)

Eval is a little messier because the compiler is designed to compile a whole method at a time and doesn’t know what to do with variables that are neither globals nor locals.

To implement eval I wrapped the eval string in a lambda that I return from a method which gets run after compilation.

   eval-compiled-function = (fn -- eval_str) 

This works for certain language expressions, but does not presently provide the ability to assign to “repl” variables.

Another drawback is that multiline statements are currently not possible (unlike in Python and Lisp).
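The wrapping trick itself translates directly to Python: compile the eval string as the body of a throwaway zero-argument function and call it. This is only a sketch of the idea (with hypothetical names), not Church's implementation.

```python
def church_style_eval(eval_str):
    """Compile eval_str as the body of a fresh function and run it.

    The compiler only knows how to compile whole functions, so the
    expression string becomes the return value of a wrapper function.
    """
    source = "def _eval_wrapper():\n    return " + eval_str
    namespace = {}
    exec(compile(source, "<repl>", "exec"), namespace)
    return namespace["_eval_wrapper"]()

print(church_style_eval("19 * 3"))   # -> 57
```

The same limitations fall out of this shape: only single expressions work, and assignments made inside the wrapper don't persist between repl lines.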

In the future I hope to experiment with a SLIME-like extension to emacs which communicates to Church across a socket, allowing interactive evaluation and compilation of source code from within the editor.

Prolog in Church

As a step towards implementing an experimental type inferencing or type checking system I have built a prolog interpreter in Church. The interpreter is based on the example in Peter Norvig’s book, “Paradigms of Artificial Intelligence Programming”.

As a first step I wrote the grammar for parsing Prolog programs. Here is a sample ‘parent’ program describing parent and grandparent relationships between Debian derivatives:


parent(debian, ubuntu).

grandparent(X,Z) :- parent(X,Y), parent(Y,Z).

and here is the OMeta grammar for parsing Prolog programs:

ometa prolog <: ometa {

comment = "/*" (~cnewline anything)* cnewline -> << '_comment >>,

ws = $  | $	,
wsnl = ws | cnewline,

prolog-top-levels = (comment | prolog-rule)*,

prolog-rule = wsnl* prolog-clause:rhead ws* (":-" ws* prolog-or:body -> << body >>)* ws* $. wsnl*  -> <<`[rule ,rhead ,body]>>,

prolog-or = listof("prolog-impl", ";"):e  wsnl* -> << `[or ,e] >>,
prolog-impl = wsnl* prolog-clause:a wsnl* "->" wsnl* prolog-or:b wsnl* -> << `[implies ,a ,b] >> | prolog-and,
prolog-and = wsnl* listof("prolog-expr", ","):e wsnl* -> << `[and ,e] >>,

prolog-clause = wsnl* prolog-name:name wsnl* prolog-arg-list:args wsnl* -> <<`[,name | ,args]>>,
prolog-arg-list = $( listof("prolog-expr", ","):args wsnl* $) wsnl* -> << args >>,

prolog-expr = wsnl* (
	         $[ wsnl* $] wsnl* -> << `[]>> |
	      	 $[ wsnl* listof("prolog-expr", ","):list-head ws* ($| wsnl* prolog-expr)*:list-tail wsnl* $] wsnl* -> << (append! list-head (if list-tail (first list-tail) nil)) >> |
	      	 $( wsnl* prolog-or:e wsnl* $) wsnl* -> << e >> |
		 prolog-clause:c wsnl* -> << c >> |
                 prolog-number |
                 prolog-variable:v wsnl* -> << v >>),

prolog-number = "-"*:sign digit+:d -> << (convert-number sign d)>>,

prolog-variable = (letter | digit | $_)+:l -> << (intern (coerce l 'string)) >>,

prolog-name = (letter | digit | $_)+:l -> << (intern (coerce l 'string)) >>  }

The core of Prolog lies in the unification and backtracking algorithms.

Unification will try to match its two inputs or else bind a variable to the corresponding value:

unify x y bindings
		(eq? bindings prolog-fail) prolog-fail
		(eq? x y) bindings
		(eq? x '_) bindings
		(eq? y '_) bindings
		(variable-symbol? x) (unify-variable x y bindings)
		(variable-symbol? y) (unify-variable y x bindings)
		(and (cons? x) (cons? y)) (unify (rest x) (rest y) (unify (first x) (first y) bindings))
		true prolog-fail

unify-variable var x bindings
		(get-binding var bindings) (unify (lookup var bindings) x bindings)
		(and (variable-symbol? x) (get-binding x bindings)) (unify var (lookup x bindings) bindings)
		(and occurs-check? (occurs-check var x bindings)) prolog-fail
		true (extend-bindings var x bindings)
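For readers unfamiliar with the algorithm, here is a rough Python equivalent of the two routines above, in the Norvig style. FAIL stands in for prolog-fail, uppercase strings stand in for variables, and the occurs check is omitted.

```python
FAIL = "fail"   # stand-in for prolog-fail

def is_var(x):
    """Prolog-style convention: variables start with an uppercase letter."""
    return isinstance(x, str) and x[:1].isupper()

def unify(x, y, bindings):
    """Match x and y, extending bindings, or return FAIL."""
    if bindings == FAIL:
        return FAIL
    if x == y:
        return bindings
    if is_var(x):
        return unify_variable(x, y, bindings)
    if is_var(y):
        return unify_variable(y, x, bindings)
    if isinstance(x, list) and isinstance(y, list) and x and y:
        return unify(x[1:], y[1:], unify(x[0], y[0], bindings))
    return FAIL

def unify_variable(var, x, bindings):
    """Bind var to x, following any existing bindings first."""
    if var in bindings:
        return unify(bindings[var], x, bindings)
    if is_var(x) and x in bindings:
        return unify(var, bindings[x], bindings)
    return {**bindings, var: x}      # extend-bindings (occurs check omitted)

print(unify(["parent", "X", "ubuntu"], ["parent", "debian", "Y"], {}))
# -> {'X': 'debian', 'Y': 'ubuntu'}
```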

Backtracking is achieved by allowing each clause to provide multiple solutions and trying all of these possibilities:

prove-all pi:prolog-interpreter goals bindings
		(eq? bindings prolog-fail) nil
		(null? goals) (list bindings)
			mapcan (fn goal1-solution
				prove-all pi (rest goals) goal1-solution)
			       (prove pi (first goals) bindings)
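The same shape in Python: every way of proving the first goal is threaded through the remaining goals, and the solution lists are concatenated (the mapcan). Here `prove` is stubbed out with a toy two-predicate database; the real interpreter derives solutions by unifying goals against database clauses.

```python
FAIL = "fail"   # stand-in for prolog-fail

def prove_all(prove, goals, bindings):
    """Return every binding set that proves all goals (mirrors prove-all)."""
    if bindings == FAIL:
        return []
    if not goals:
        return [bindings]
    solutions = []
    for b in prove(goals[0], bindings):                    # each way to prove goal 1
        solutions.extend(prove_all(prove, goals[1:], b))   # thread through the rest
    return solutions

def prove(goal, bindings):
    """Toy prover: color(X) has three solutions, different(X, Y) filters."""
    name, args = goal[0], goal[1:]
    if name == "color":
        return [{**bindings, args[0]: c} for c in ("red", "green", "blue")]
    if name == "different":
        vals = [bindings.get(t, t) for t in args]
        return [bindings] if vals[0] != vals[1] else []
    return []

sols = prove_all(prove, [("color", "X"), ("color", "Y"), ("different", "X", "Y")], {})
print(len(sols))   # 6 solutions where X and Y differ
```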

As a test I used this map coloring program from a Prolog tutorial:

member(X,[X|_]).
member(X,[_|List]) :- member(X,List).

adjacent(X,Y,Map) :-  member([X,Y],Map) ; member([Y,X],Map). 

find_regions([[X,Y]|S], R,A) :- 
 (member(X,R) ->  
  (member(Y,R) -> find_regions(S,R,A)     ; find_regions(S,[Y|R],A))  ; 
  (member(Y,R) -> find_regions(S,[X|R],A) ; find_regions(S,[X,Y|R],A) )). 

color(Map,Colors,Coloring) :-
color_all([R|Rs],Colors,[[R,C]|A]) :- 

conflict(Map,Coloring) :- member([R1,C],Coloring), 


which yields the following colourings:

[[map1 M] [color M [red green blue yellow] Coloring]]

M = [[1 2] [1 3] [1 4] [1 5] [2 3] [2 4] [3 4] [4 5]]
Coloring = [[5 red] [4 green] [3 red] [1 blue] [2 yellow]]
M = [[1 2] [1 3] [1 4] [1 5] [2 3] [2 4] [3 4] [4 5]]
Coloring = [[5 red] [4 green] [3 red] [1 yellow] [2 blue]]

You can browse the source files for the Prolog interpreter here.


Porting jonesforth to x86-64

I have ported jonesforth to 64-bit x86 code.

jonesforth is a tutorial-style implementation of forth which explains in detail how the compiler and runtime are implemented. Porting the code to a slightly different assembly language helped me to think carefully about what each primitive does and about how it is used in the runtime code.

As noted in the jonesforth comments, the original advantage of using direct-threaded code on a 16-bit machine is that calling each word can be encoded in two bytes instead of three. That’s a savings of 33%. On 32-bit x86, it’s four bytes versus five, saving 20%. In my 64-bit implementation I chose to extend the 4-byte addresses to 8-byte words. This actually results in wasting space rather than saving it because on x86-64 calls and branches are usually encoded with a 5-byte instruction using relative displacement.

The port was fairly straightforward: I mostly just replaced 32-bit registers (eax, esp, esi etc) with the 64-bit equivalents (rax, rsp, rsi etc) and changed every reference to the word size from 4 to 8.

The biggest difference is that syscalls use different registers on 64-bit Linux, and these registers can be clobbered during the call.

You can get the code from the mercurial repository (or browse it here):

hg clone

To compile it:

gcc -m64 -nostdlib -static -Wl,-Ttext,0 -Wl,--build-id=none -o jonesforth64 jonesforth64.S

To run it:

cat jonesforth64.f - | ./jonesforth64
1 2 3 4
4 3 2 1
3 2 4 1

I’ve tested most of the code in the .f file, but I haven’t yet implemented C strings, file-io or the built-in assembler.

I’ve tried to keep the comments intact, but haven’t updated them to reflect different word sizes or registers etc.