Bluish Coder

Programming Languages, Martial Arts and Computers. The Weblog of Chris Double.


Runtime typing and eval in Alice ML

I originally wrote this post three years ago but I wasn't happy with how it read so never finished it. It's been sitting around in draft making me feel guilty for too long so I've cleaned it up and published it.

I like the style of prototyping and programming that dynamic languages like Self promote. When building systems inside the language environment it feels like you are living in a soup of objects. Programming becomes creating and connecting objects to perform tasks while debugging and refactoring as you go. The animated image below shows the Self environment being used, for example, to instantiate and run a VNC object I wrote. Other examples can be seen in screencasts in my Self language posts.

Recently I've been using more statically typed languages to explore the world of type safety and how it can improve the correctness of programs. My series of ATS posts goes through a lot of the features this approach provides. Most of these languages promote an edit/compile/link/run style of development and I miss the live development and debugging of dynamic environments.

Some of the statically typed functional programming languages provide ways of doing dynamic types. Alice ML, developed in the mid-2000s, was an extension of Standard ML which provided support for concurrent, distributed and constraint programming. It was an attempt to see what a statically typed functional version of the Mozart/Oz language would be like. Development stopped in 2007 with the release of version 1.4 of Alice ML but the system remains very usable. I had been following Alice ML since the early days of its development and its concurrency and distribution features were the inspiration for some of my explorations with using futures and promises in JavaScript and concurrency in Factor.

As part of its support for distributed programming, Alice ML required the ability to serialize and deserialize values along with their types. This form of dynamic behaviour would seem to be useful for developing a live coding environment. In fact Alice ML includes a GUI editor and REPL written in Alice ML that makes use of the library to evaluate, compile and produce components and executables.

I've imported the source of Alice ML into a github repository with minor bitrot changes and a couple of bug fixes so that it builds on recent Linux and Mac OS X systems. The code there is from the original Alice ML source with many changes and fixes made by Gareth Smith in his bitbucket repository. The original Alice developers and Gareth have kindly allowed me to host this on github.


Alice ML does dynamic runtime typing through packages. A package encapsulates a module and its signature. The package is an opaque type and accessing the module stored within is only possible via an unpack operation which requires giving the explicit signature of the module stored. If the signature doesn't match the type of the module stored then a runtime exception occurs. Packages can be passed around as first class values, stored in files, sent to other processes, etc.

Packages are created using the pack expression. This follows the form:

pack structure_expression : signature_expression

Where structure_expression and signature_expression are expressions that evaluate to Standard ML structures and signatures. The following would create a package for the value 42 stored in a module (as typed in the Alice ML REPL):

> val p = pack struct val x:int = 42 end : sig val x:int end;
val p : package = package{|...|}

In the pack expression a structure is created with an x member of type int and the value of that is 42. This structure is the value that is stored in the package. The type of this is given by the signature expression and when later unpacked only this signature can be used to get at the value. For simple examples like this the struct and sig syntax is quite verbose but Alice ML allows shortening this to just the contents of the structure and signature. The following is the equivalent shortened code:

> val p = pack (val x:int = 42) : (val x:int);
val p : package = package{|...|}

Getting the value back from a package is done using unpack. The general form of the unpack expression is:

unpack expression : signature_expression

If the signature_expression does not match the signature of the value stored in the package then an exception is raised. The type of the unpack expression is signature_expression so if it successfully unpacks then use of the resulting value is type safe. Unpacking our example above looks like:

> structure S = unpack p : sig val x:int end;
structure S : sig val x : int end

Or using the shorter syntax:

> structure S = unpack p : (val x:int);
structure S : sig val x : int end

The resulting module S can be used as any other SML module to access the fields within:

> print (Int.toString S.x);


To create an environment that allows evaluating code and manipulating results requires an eval facility. Alice ML provides this through the Compiler module. This module provides, amongst other functions, the following variants of eval:

val eval :     string -> package
val evalWith : env * string -> env * package

The first function, eval, takes a string of Alice ML code, evaluates it, and returns the result as a package. The second, evalWith, takes an additional parameter: the environment in which to evaluate the code. It also returns the modified environment after evaluating the code. This allows keeping a persistent state of changes made by the evaluated code.

The result is returned as a package because the type of the evaluated code is unknown. It could be anything. If the caller of eval needs to manipulate or display the result it must unpack it with the type it expects the package to contain, and handle any exception that occurs if that type turns out to be incorrect at runtime. An example of doing this is:

> val x = Compiler.eval("1+2");
val x : package = package{|...|}
> structure X = unpack x : (val it:int);
structure X : sig val it : int end
val it : int = 3

In this case the result of our evaluation is an int so this is what's used in the signature for the unpack expression.
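Unpacking with the wrong signature raises an exception rather than returning a bad value, so a caller can guard the unpack. A small sketch (the helper name is mine, and it assumes Alice ML's Package.Mismatch exception for failed unpacks):

```sml
(* Hypothetical helper: try to read an eval result as an int,
   returning NONE when the package holds some other type.
   Package.Mismatch is assumed to be the exception raised on a
   failed unpack. *)
fun intResult (p : package) : int option =
    (let structure X = unpack p : (val it : int)
     in SOME X.it end)
    handle Package.Mismatch _ => NONE
```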

An example using evalWith to track the changes to the environment is:

> val x = Compiler.evalWith(Compiler.initialEnv,
                            "fun fac(n:int) = if n <= 1 then 1 else n * fac(n - 1)");
val x : Compiler.env * package = (_val, package{|...|})
> val y = Compiler.evalWith(#1 x, "fac(10)");
val y : Compiler.env * package = (_val, package{|...|})
> structure Y = unpack (#2 y) : (val it:int);
structure Y : sig val it : int end
val it : int = 3628800

The function evalWith returns a tuple where the first element is the resulting environment after the evaluation and the second element is the package containing the result. For the second call to evalWith the environment resulting from the first call is passed to it so the function fac can be found.
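This environment threading lends itself to a small helper for evaluating a sequence of declarations. A sketch (the function name is mine, not part of the Compiler module):

```sml
(* Hypothetical helper: evaluate each string in order, passing the
   environment from one evaluation on to the next. The final
   environment is returned for further use. *)
fun evalSeq env []           = env
  | evalSeq env (code::rest) =
      let val (env', _) = Compiler.evalWith (env, code)
      in evalSeq env' rest end

(* e.g. evalSeq Compiler.initialEnv
          ["fun fac (n:int) = if n <= 1 then 1 else n * fac (n - 1)",
           "val result = fac 10"] *)
```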

Pretty Printing

One thing to note in the previous example is that the call to unpack required knowing the type of what we were unpacking. This is usually the case but when writing a REPL we need to print the result of evaluating what is entered at the top level - and this could be any type depending on what the user entered to evaluate.

There are some internal Alice ML modules that make it possible to do this. An example follows:

> import structure Reflect     from "x-alice:/lib/system/Reflect";
> import structure PPComponent from "x-alice:/lib/system/PPComponent";
> import structure PrettyPrint from "x-alice:/lib/utility/PrettyPrint";
> val a = Compiler.prepareWith (Compiler.initialEnv, "1+2");
val a : Compiler.env * (unit -> package) * t = (_val, _fn, _val)
> val b = (#2 a) ();
val b : package = package{|...|}
> val c = Reflect.reflectPackage b;
val c : Reflect.module * t = (_val, _lazy)
> val d = PPComponent.ppComp(#1 c, #2 c);
val d : doc = _val
> PrettyPrint.toString(d,40);
val it : string = "val it : int = 3"

The Compiler.prepareWith function does not evaluate the string passed to it but performs part of the step of evaluation. It returns a tuple containing the environment which will result from evaluation, a function that when called will perform the evaluation, and a value representing the type of the result of the evaluation.

In step (b) the evaluation function is called which returns the package containing the result. Reflect.reflectPackage returns a tuple describing the package. These are passed to PPComponent.ppComp to return a PrettyPrint document. The pretty printer is based on A prettier printer by Phil Wadler. PrettyPrint.toString converts this to a string which could then be displayed by a REPL.


As mentioned previously the Alice ML tools are written in Alice ML. The toplevel code uses the modules and functions outlined previously to implement the REPL and IDE. Unfortunately it's mostly undocumented but the source is available to show how it is implemented and used.

There's much more to Alice ML's runtime use of types, including pickling, components, sandboxing, and distribution.

An interesting exercise would be to write a web based client to provide a "Try Alice ML" in a similar manner to other languages' online playgrounds, allowing Alice ML code snippets to be tried without installing it. I'd also like to explore how close to a Self-like environment could be achieved in an Alice ML system.

Tags: aliceml 


Distributed Wikipedia Mirrors in Freenet

There was a recent post about uncensorable Wikipedia mirrors on IPFS. The IPFS project put a snapshot of the Turkish version of Wikipedia on IPFS. This is a great idea and something I've wanted to try on Freenet.

Freenet is an anonymous, secure, distributed datastore that I've written a few posts about. It wasn't too difficult to convert the IPFS process to something that worked on Freenet. For the Freenet keys linked in this post I'm using a proxy that retrieves data directly from Freenet. This uses the SCGIPublisher plugin on a local Freenet node, which publishes the list of whitelisted keys that are usable. There is also a gateway available. The keys can also be used directly from a Freenet node, which is likely to be more performant than going through my underpowered proxy. Keep in mind that the "distributed, can't be taken down" aspect of the sites on Freenet applies only when they are accessed directly through Freenet. It's quite likely my clearnet proxy won't be able to handle large amounts of traffic.

I started with the Pitkern/Norfuk Wikipedia Snapshot as that was relatively small. Once I got the scripts for that working I converted the Māori Wikipedia Snapshot. The latest test I did was the Simple English Wikipedia Snapshot. This was much bigger so I did the version without images first. Later I plan to try the version with images, once I've resolved some issues with the current process.

The Freenet keys for these mirrors are:

  • USK@m79AuzYDr-PLZ9kVaRhrgza45joVCrQmU9Er7ikdeRI,1mtRcpsTNBiIHOtPRLiJKDb1Al4sJn4ulKcZC5qHrFQ,AQACAAE/simple-wikipedia/0/
  • USK@jYBa5KmwybC9mQ2QJEuuQhCx9VMr9bb3ul7w1TnyVwE,OMqNMLprCO6ostkdK6oIuL1CxaI3PFNpnHxDZClGCGU,AQACAAE/maori-wikipedia/5/
  • USK@HdWqD7afIfjYuqqE74kJDwhYa2eetoPL7cX4TRHtZwc,CeRayXsCZR6qYq5tDmG6r24LrEgaZT9L2iirqa9tIgc,AQACAAE/pitkern-wikipedia/2/

The keys are 'USK' keys. These keys can be updated and have an edition number at the end, which increases as newer versions of the mirrors are pushed out. A Freenet node will often find the latest edition it knows about, or the latest edition can be searched for by using '-1' as the edition number (e.g. requesting .../simple-wikipedia/-1/).

The approach I took for the mirroring follows the one IPFS used. I used the ZIM archives provided by Kiwix and a ZIM extractor written in Rust. The archive was extracted with:

$ extract_zim wikipedia_en_simple_all_nopic.zim

This places the content in an out directory. All HTML files are stored in a single directory, out/A. In the Simple English case that's over 170,000 files. This is too many files in a directory for Freenet to insert. I wrote a bash script to split the directory so that files are stored in '000/filename.html', where '000' is the first three digits of the SHA256 hash of the base filename, computed with:

$ echo "filename.html"|sha256sum|awk '{ print $1 }'|cut -c "1,2,3"

The script then went through and adjusted the article and image links on each page to point to the new location. The script does some other things to remove HTML tags that the Freenet HTML filter doesn't like and to add a footer about the origin of the mirror.
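The splitting step itself can be sketched in a few lines of shell (the file and directory names here are illustrative, not the actual script's; the real script also does the link rewriting and HTML cleanup described above):

```shell
# Bucket each HTML file into a subdirectory named after the first three
# hex digits of the SHA256 hash of its filename. A demo input file is
# created here so the sketch is self-contained.
mkdir -p out/A result
echo '<html></html>' > out/A/Norfolk_Island.html
for f in out/A/*.html; do
  name=$(basename "$f")
  bucket=$(echo "$name" | sha256sum | awk '{ print $1 }' | cut -c 1,2,3)
  mkdir -p "result/$bucket"
  cp "$f" "result/$bucket/$name"
done
ls result/*/Norfolk_Island.html
```

Because the bucket is derived from the filename alone, a page's links can be rewritten to the new location without consulting any index.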

Another issue I faced was that filenames with non-ASCII characters were handled differently by Freenet depending on whether the file was inserted as a single file or as part of a directory. In the latter case the file could not be retrieved later. I worked around this by transliterating filenames into ASCII. A more robust solution will be needed if I can't track down where the issue is occurring.

The script to do the conversion is in my freenet-wikipedia github repository. To convert a ZIM archive the steps are:

$ wget
$ extract_zim wikipedia_pih_all.zim
$ ./
$ ./ result my-mirror index.html

At completion of the insert this will output a list of keys. The uri key is the one that can be shared for others to retrieve the insert. The uskinsert key can be used to insert an updated version of the site:

$ ./ result my-mirror index.html <uskinsert key>

The script was a quick proof-of-concept hack and could be improved in many ways. It is also very slow: it took about 24 hours to do the Simple English conversion. I welcome patches and better ways of doing things.

The repository includes a bash script which will insert the site using the Freenet ClientPutDiskDir API message. This is a useful way to get a directory online quickly but is not an optimal way of inserting something the size of the mirror. The initial request for the site downloads a manifest containing a list of all the files in the site. This can be quite large: 12MB for the Simple English mirror with no images, and almost 50MB for the Māori mirror due to the images. The layout of the files doesn't take into account likely retrieval patterns. So images and scripts that are included in a page are not downloaded as part of the initial page request, but may result in pulling in larger amounts of data depending on how each file was inserted. A good optimisation project would be to analyse the directory to be inserted and create an optimal Freenet insert for faster retrieval. pyFreenet has a utility, freesitemgr, that can do some of this, and there are other insertion tools like jSite that may also do a better job.

My goal was to do a proof of concept to see if a Wikipedia mirror on Freenet was viable. This seems to be the case and the Simple English mirror is very usable. Discussion on the FMS forum when I announced the site has been positive. I hope to improve the process over time and welcome any suggestions or enhancements to do that.

What are the differences between this and the IPFS mirror? It's mostly down to how IPFS and Freenet work.

In Freenet content is distributed across all nodes in the network. The node that has inserted the data can turn their node off and the content remains in the network. No single node has all the content. There is redundancy built in so if nodes go offline the content can still be fully retrieved. Node space is limited so as data is inserted into Freenet, data that is not requested often is lost to make room. This means that content that is not popular disappears over time. I suspect this means that some of the Wikipedia pages will become inaccessible. This can be fixed by periodically reinserting the content, healing the specific missing content, or using the KeepAlive plugin to keep content around.

Freenet is encrypted and anonymous. You can browse Wikipedia pages without an attacker knowing that you are doing so. Your node doesn't share the Wikipedia data, except possibly small encrypted chunks of parts of it in your datastore, and it's difficult for an attacker to identify you as a sharer of that data. The tradeoff of this security is that retrievals are slower.

In IPFS a node inserting the content cannot be turned off until that content is pinned by another node on the network and fully retrieved. Nodes that pin the content keep the entire content on their node. If all pinned nodes go offline then the content is lost. All nodes sharing the content advertise that fact. It's easy to obtain the IP address of all nodes that are sharing Wikipedia files. On the positive side IPFS is potentially quite a bit faster to retrieve data.

Both IPFS and Freenet have interesting use cases and tradeoffs. The intent of this experiment is not to present one or the other as a better choice, but to highlight what Freenet can do and make the content available within the Freenet network.

Tags: freenet 


Installing GNAT and SPARK GPL Editions

GNAT is an implementation of the Ada programming language. SPARK is a restricted subset of Ada for formally verifying programs. It provides features comparable to languages like Rust and ATS. A recent article comparing SPARK to Rust caught my eye and I decided to spend some time learning Ada and SPARK. This post just outlines installing an implementation of both, a quick test to see if the installation worked, and some things to read to learn more. I hope to post more later as I learn more.


Download GNAT GPL from the AdaCore site. Choose "Free Software or Academic Development" and click "Build Your Download Package". Select the platform and click the checkboxes next to the required components. In my case I chose them all, but "GNAT Ada 2016" and "Spark 2016" are the main ones I needed.

To install Ada and SPARK from the downloaded tar file:

$ tar xvf AdaCore-Download-2017-04-27_0537.tar
$ cd x86_64-linux/adagpl-2016/gnatgpl
$ mkdir ~/ada
$ tar -xf gnat-gpl-2016-x86_64-linux-bin.tar.gz
$ cd gnat-gpl-2016-x86_64-linux-bin
$ ./doinstall
...answer prompts about where to install...
...for this example I used /home/username/gnat...
$ export PATH=/home/username/gnat/bin:$PATH

$ cd ../sparkgpl
$ tar -xf spark-gpl-2016-x86_64-linux-bin.tar.gz
$ cd spark-gpl-2016-x86_64-linux-bin
$ ./doinstall
...answer prompts about where to install... should pick up the location used above...

Be aware that the install comes with its own gcc and other utilities. By putting it first in the PATH they are used in place of the system's versions.

Testing GNAT

The following is a "Hello World" application in Ada:

with Ada.Text_IO; use Ada.Text_IO;
procedure Hello is
begin
  Put_Line ("Hello World!");
end Hello;

It imports a package, Ada.Text_IO, and uses it so the package contents can be used without prefixing them with the package name. A procedure called Hello is created that outputs a line of text. If put in a file hello.adb it can be compiled with:

$ gnatmake hello.adb
gnatbind -x hello.ali
gnatlink hello.ali

$ ./hello
Hello World!

Completely static executables can also be created:

$ gnatmake hello.adb -bargs -static -largs -static
$ ldd hello
not a dynamic executable
$ ./hello
Hello World!

Testing SPARK

I used an example taken from Generating Counterexamples for failed Proofs. The SPARK checker, gnatprove, requires a project file. This is the contents of saturate.gpr:

project Saturate is
   for Source_Dirs use (".");

   package Compiler is
      for Default_Switches ("Ada") use ("-gnatwa");
   end Compiler;
end Saturate;

It gives the project name, Saturate, the location to search for source files (the current directory), and any compiler switches. The function to be implemented is a saturation function. It ensures a value given to it is in a specific range: in this case, a non-negative value less than or equal to 255. In the function's spec file we put the interface definition:

with Interfaces;
use Interfaces;

function Saturate (Val : Unsigned_16) return Unsigned_16 with
  SPARK_Mode,
  Post => Saturate'Result <= 255 and then
         (if Val <= 255 then Saturate'Result = Val);

The code first pulls the Interfaces package into the current namespace. This provides unprefixed access to Unsigned_16. It declares a function, Saturate, that takes an Unsigned_16 as an argument and returns the same type. The SPARK_Mode aspect is an annotation that identifies code to be checked by SPARK. The Post portion is a postcondition that the implementation of the function must adhere to. In this case the result must be less than or equal to 255 and, if the given value is less than or equal to 255, the result must equal that value.

The implementation of the function is in a file saturate.adb:

function Saturate (Val : Unsigned_16) return Unsigned_16 is
begin
  return Unsigned_16'Max (Val, 255);
end Saturate;

This calls the Max function for Unsigned_16 types to return the maximum between the given value and 255. The code compiles with the Ada compiler:

$ gnatmake saturate.adb
gcc -c saturate.adb

It fails however when running the SPARK checker:

$ gnatprove -Psaturate 
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ... medium: postcondition might fail (e.g. when Saturate'Result = 255 and Val = 0)
Summary logged in gnatprove/gnatprove.out

This tells us that the postcondition might fail if the value given to the function is 0 and the result is 255. This is because we are using Max: given 0, the Max of 0 and 255 is 255, so the function result will be 255. The postcondition however states that the result should be equal to Val, i.e. 0. Changing the function call to Min fixes it:

$ gnatprove -Psaturate 
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
Summary logged in gnatprove/gnatprove.out
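For reference, the corrected body simply swaps Max for Min, clamping values above 255 down to 255:

```ada
function Saturate (Val : Unsigned_16) return Unsigned_16 is
begin
  return Unsigned_16'Min (Val, 255);
end Saturate;
```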

Having a postcondition that fully states what the result should be is probably unlikely in a lot of code. If the signature was the following, would SPARK still find the error?:

function Saturate (Val : Unsigned_16) return Unsigned_16 with
  Post => Saturate'Result <= 255

$ gnatprove -Psaturate 
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ... medium: postcondition might fail,
         cannot prove Saturate'Result <= 255 (e.g. when Saturate'Result = 256)
Summary logged in gnatprove/gnatprove.out

Apparently so. Now it identifies that the result can be 256. Other examples applying different contracts to the function are in the original article.


The GNAT User's Guide for Native Platforms and the SPARK 2014 User's Guide contain the instructions for the main tools. GNAT can interface with C and C++. There is a full list of documentation here. Two useful books covering Ada and SPARK:

Some technical papers that give a quick overview of Ada:

I used the command line tools here but there is a gps command, a full graphical IDE, which may be more approachable. I'm looking forward to using Ada and SPARK and seeing how they compare to tools like Rust and ATS.

Tags: ada  spark 


Shen Language Port for Wasp Lisp

This post intersects two of my favourite lispy languages. Shen is a functional programming language with a number of interesting features. These include:

  • Optional static type checking
  • Pattern matching
  • Integrated Prolog system
  • Parsing libraries

I've written about Shen Prolog before which gives a bit of a feel for the language.

Wasp Lisp is a small Scheme-like lisp with lightweight concurrency and the ability to send bytecode across the network. It's used in the MOSREF secure remote injection framework. I've written a number of posts about it.

A feature of Shen is that it is designed to run on top of a lighter weight lisp called KLambda. KLambda has only about 46 primitives, many of which already exist in lisp systems, making it possible to write compilers to other languages without too much work. There exist a few Shen ports already. I wanted to port Shen to Wasp Lisp so I can experiment with using the pattern matching, Prolog and types in some of the distributed Wasp code I use.

Wasp Lisp is not actively developed but the author Scott Dunlop monitors the github repository and processes pull requests. Shen requires features that Wasp Lisp doesn't currently support, like real numbers. I maintain a fork on github that implements the features that Shen needs and any features that apply back to core Wasp Lisp I'll upstream.

This port is heavily based on the Shen Scheme implementation. Much of the code is ported from Scheme to Wasp Lisp and the structure is kept the same. The license for code I wrote is the same as the Shen Scheme License, BSD3-Clause.

The Shen Source is written in the Shen language. Using an existing Shen implementation this source is compiled to Klambda:

$ shen-chibi
(0-) (load "make.shen")
(1-) (make)
compiling ...

Porting to another language then becomes a matter of writing a KLambda interpreter or compiler; in this case, a compiler from KLambda to Wasp Lisp. The primitives also need implementing, but there aren't many of them. Some of the characters that KLambda uses in symbols aren't compatible with the Wasp reader, so I used an S-expression parser to read the KLambda code and then walked the tree converting expressions as it went. This is written in Wasp code, converted from the original Scheme. In hindsight it probably would have been easier to write this part in Shen and bootstrap it in another Shen instance to make use of Shen's parsing and pattern matching libraries.

Shen makes heavy use of tail calls, meaning some form of tail call optimisation is needed for efficiency. In a previous post I mentioned some places where Wasp doesn't identify tail calls. These are cases Shen hits a lot, causing performance issues. I made some changes to the optimizer to identify these cases and it improved the Shen on Wasp runtime performance quite a bit.

Current Port State

This is a very early version. I've only just got it working. The Shen tests pass with the exception of the Proof Assistant test which hangs when loading.

Note 2017-04-26: The bug with the proof assistant test not passing is now fixed. It was caused by an integer overflow when computing complexities within the Shen prolog code. Wasp integers are smaller than other Shen implementations which is why none of them hit the issue. The binaries have been updated with this fix.

The port is slower than I'd like - about half the speed of the Shen C interpreter and significantly slower than Shen Scheme and Shen on SBCL. I've done some work on optimizing tail calls in the fork of the Wasp VM for Shen but there's much more work on the entire port that could improve things.


The following compiled binaries are available:

shen_static.bz2. This is a static 64-bit Linux binary with no dependencies. It should run on any 64-bit Linux system. Decompress with:

$ bunzip2 shen_static.bz2
$ chmod +x shen_static
$ ./shen_static

shen_macos.bz2. A 64-bit binary for Mac OS. Decompress with bunzip2 as above. There is also a zip file containing a Windows 64-bit binary, shen.exe. It should run on any modern 64-bit Windows system.


First, build the fork of Wasp Lisp needed to run Shen:

$ git clone --branch shen wasp-shen
$ cd wasp-shen
$ make install

Follow the prompts for the location to install the Wasp Lisp binaries and add the bin directory of that location to your PATH:

$ export PATH=$PATH:/path/to/install/bin

Shen is provided in source code format from the Shen Sources github repository. The code is written in Shen. It needs a working Shen system to compile that code to KLambda, a small Lisp subset that Shen uses as a virtual machine.

This KLambda code can be found in the kl directory in the shen-wasp repository. These KLambda files are compiled to Wasp Lisp and stored as compiled code in the compiled directory. The shen-wasp repository includes a recent version of these files. To generate, or re-generate, run the following commands:

$ git clone
$ cd shen-wasp
$ rlwrap wasp
>> (import "driver")
>> (compile-all)
Compiling toplevel.kl
Compiling core.kl
Compiling sys.kl
Compiling sequent.kl
Compiling yacc.kl
Compiling reader.kl
Compiling prolog.kl
Compiling track.kl
Compiling load.kl
Compiling writer.kl
Compiling macros.kl
Compiling declarations.kl
Compiling types.kl
Compiling t-star.kl

This will create files with the Wasp Lisp code in the compiled/*.ms files, and the compiled bytecode in compiled/*.mo files.

Creating a Shen executable can be done with:

$ waspc -exe shen
$ chmod +x shen
$ rlwrap ./shen
Shen, copyright (C) 2010-2015 Mark Tarver, Shen 20.0
running under Wasp Lisp, implementation: WaspVM
port 0.3 ported by Chris Double


Note that it takes a while to startup as it runs through the Shen and KLambda initialization.

Running from the Wasp REPL

Shen can be run and debugged from the Wasp REPL. To load the compiled code and run Shen:

$ rlwrap wasp
>> (import "driver")
>> (load-all)
>> (kl:shen.shen)
Shen, copyright (C) 2010-2015 Mark Tarver, Shen 20.0
running under Wasp Lisp, implementation: WaspVM
port 0.3 ported by Chris Double


When developing on the compiler it's useful to use eval-all instead of load-all. This will load the KLambda files, compile them to Wasp Lisp and eval them:

>> (eval-all)
>> (kl:shen.shen)

A single input line of Shen can be entered and run, returning to the Wasp REPL with:

>> ( 
(+ 1 2)
3:: 3

KLambda functions can be called from Wasp by prefixing them with kl:. For example:

>> (
(define factorial
  1 -> 1
  X -> (* X (factorial (- X 1))))
factorial:: factorial
>> (kl:factorial 10)
:: 3628800

Shen allows introspecting compiled Shen functions and examining the KLambda code. From the Wasp REPL this is useful for viewing the KLambda and comparing with the generated Wasp Lisp:

>> (kl:ps 'factorial)
:: (defun factorial (V1172) (cond (...) (...)))
>> (pretty (kl:ps 'factorial))
(defun factorial (V1172 ) (cond ((= 1 V1172 ) 1 ) (#t (* V1172 (factorial (- V1172 1 ) ) ) ) ) ) :: null
>> (pretty (kl->wasp (kl:ps 'factorial)))
(begin (register-function-arity (quote factorial ) 1 )
       (define (kl:factorial V1172)
         (cond ((kl:= 1 V1172) 1)
               (#t (* V1172 (kl:factorial (- V1172 1))))))
       (quote factorial ) ) :: null

Cross Compilation

Wasp binaries are a small Wasp VM stub plus the compiled Lisp code appended to it. This makes building for other platforms easy as long as you have the stub for that platform. Wasp can be built for Android and static binaries via musl are possible.

I've made the following stubs available for building binaries for other systems:

Decompress them and copy into the lib/waspvm-stubs directory where Wasp Lisp was installed. Shen can then be built on any host platform for 64 bit linux, 64 bit Linux static binaries, 64 bit Windows or 64 bit Mac OS with:

$ waspc -exe shen -platform linux-x86_64
$ waspc -exe shen_static -platform static-linux-x86_64
$ waspc -exe shen.exe -platform win-x86_64
$ waspc -exe shen_macos -platform Darwin-x86_64

Learning Shen

Some places to go to learn Shen:

Other Ports

Tags: shen  waspvm 


Exploring 3-Move - A LambdaMOO inspired environment

I was a fan of MUDs from my earliest introduction to computers. I remember writing to Richard Bartle when I was young asking about the possibility of accessing MUD1 from New Zealand, after having read about it in a magazine. The reply was very positive but unfortunately the cost of 300 baud modem access at international phone rates was prohibitive. Later in life, one of my first uses of the internet, via a shell account on my ISP, was to compile and run a MUD client.

The MOO variants of MUDs are particularly interesting as they are multi user, programmable, interactive systems. They're like IRC where users can create objects, rooms and worlds by writing programs within the system. This resulted in interesting programming systems with permission models for different levels of users. Content, including code, was stored in a persistent object database. LambdaMOO was a very popular instance of a MOO.

A while back I stumbled across 3-Move, a multi user networked online text-based programmable environment, by Tony Garnock-Jones. It's a neat system that includes:

  • Persistent object-oriented database
  • A MOO inspired security model
  • Prototype-based object-oriented language
  • First-class functions, continuations and preemptive green threads.

There's not much written in the way of documentation on getting it running so this post documents how I got it working. It no longer appears to be actively developed but it's a nice small system to learn from.


Building 3-move requires cloning the source and running make:

$ git clone
$ make
$ ./move/move
move [-t] <dbfilename> [<move-source-code-file> ...]

This produces the move executable, which is the virtual machine, and checkpoint-cleanup, a helper program that cleans up database checkpoint files.

The move executable requires a database as an argument. This database stores the state of the persistent world. It's loaded in memory when move is run and can be saved by occasionally checkpointing the system. Optional arguments are move source code files that are compiled and executed.

In the db directory are a number of move source files that contain the code for a multi user virtual environment. The command parser, socket usage, line editor, etc are all written in these move files.


To create an initial database there is a build script that creates a database from the move source files. It can be run with:

$ cd db
$ ./build

This creates the database in a file, db, and a symbolic link to the move executable in the current directory for easy execution. All the build script does is run move on files in the right order to build the database. It's equivalent to:

$ ./move db root.move && mv move.checkpoint.000 db
$ ./move db system.move && mv move.checkpoint.000 db
$ ./move db thing.move && mv move.checkpoint.000 db
$ ./move db login.move && mv move.checkpoint.000 db
$ ./move db player.move && mv move.checkpoint.000 db
$ ./move db room.move && mv move.checkpoint.000 db
$ ./move db exit.move && mv move.checkpoint.000 db
$ ./move db note.move && mv move.checkpoint.000 db
$ ./move db program.move && mv move.checkpoint.000 db
$ ./move db registry.move && mv move.checkpoint.000 db

Each run of move creates a new database called move.checkpoint.000, containing the state of the old database plus any changes made by the move file. This is then renamed to db and move is run again on the next file. The end result is a complete database with a default virtual environment.
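The run-then-rename loop above is mechanical, so it's easy to script. A minimal Python sketch, as a dry run that just prints each command (the file names and commands are taken from the build script above; build_commands is my own helper name):

```python
# Dry-run sketch of the db build loop: print the command run per source file.
SOURCES = ["root", "system", "thing", "login", "player",
           "room", "exit", "note", "program", "registry"]

def build_commands(db="db"):
    # Each step compiles one move file into a new checkpoint,
    # then renames the checkpoint back over the database.
    return [f"./move {db} {src}.move && mv move.checkpoint.000 {db}"
            for src in SOURCES]

for cmd in build_commands():
    print(cmd)
```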


With the database built the system can be run with:

$ ./move db restart.move

restart.move calls start-listening() to start the socket server accepting connections:



It calls the set-realuid and set-effuid functions with Wizard before calling start-listening to ensure that the system runs as the default "Wizard" user, which has full permission to call the socket related functions.

start-listening is implemented in login.move. It creates a server socket that accepts connections on port 7777. It can be connected to via telnet, netcat, or a similar program:

$ nc 7777
        _/      _/    _/_/_/    _/      _/  _/_/_/_/_/  
       _/_/  _/_/  _/      _/    _/  _/    _/       
      _/  _/  _/  _/      _/    _/  _/    _/_/_/        
     _/      _/  _/      _/      _/      _/         
    _/      _/    _/_/_/        _/      _/_/_/_/_/      

3-MOVE Copyright (C) 1997--2009 Tony Garnock-Jones.
This program comes with ABSOLUTELY NO WARRANTY; for details see
This is free software, and you are welcome to redistribute it
under certain conditions; see for


The database only contains one user, Wizard, to begin with. It has no password:

login: Wizard

Logging in as player Wizard...

Welcome to MOVE.

Generic Room
This is a nondescript room.
Wizard is here.

The @verbs command can be used to find out what commands can be sent to objects:

@verbs me
  Verbs defined on Wizard (#7) and it's parents:
    (Wizard (#7))
    (Generic Player (#2))
    @setpass <pass>

@verbs here
  Verbs defined on Generic Room (#3) and it's parents:
    (Generic Room (#3))
    say <sent>
    @@shout <sent>

@examine is another useful verb for finding out internal details of an object:

@examine me
  Wizard (#7) (owned by Wizard (#7))
  Location: #<object Generic Room (#3)>
  Contents: [Registry (#0), Generic Program (#6), Generic Note (#5), Generic Exit (#4),
  Generic Thing (#1)]
  Parent(s): Generic Player (#2)
  Methods: [@checkpoint-verb, @shutdown-verb]
  Slots: [verbs, connection, is-programmer, registry-number, name, awake]
  Verbs: [@shutdown-verb, @checkpoint-verb]

It's important to set a password when first logging in:

@setpass ********
  Password changed.


A multi user environment without other users isn't much fun. Guest users can be added with:

@build Guest as guest1
  You created an object.
  You named it "guest1".
  It was registered as guest1 (#9).

These are special users: a login name of guest will pick one of the current guest users that is not logged in. This allows people to explore the system without creating a user. Specific users can also be created:

@build Player as chris
  You created an object.
  You named it "chris".
  It was registered as chris (#10).

Here's an example interaction of the chris user logging in:

@setpass foo
  Password changed.
@describe chris
  Editing description of #<object chris (#10)>.
  Type .s to save, or .q to lose changes. .? is for help.
2> .l
  --- Current text:
  1> You see a player who needs to @describe %r.
2> .d 1
1> An amorphous blob shimmers in the light.
2> .s
  Setting description...
  Description set.

look at me

  An amorphous blob shimmers in the light.
  (chris is awake.)

Creating Rooms

Wizards can create rooms and routes to them with @dig:

@dig here to Large Room as north
  You dig the backlink exit, named "out", from "Large Room" to "Generic Room (#3)".
  You dig the outward exit, named "north", from "Generic Room (#3)" to "Large Room".

  Generic Room
  This is a nondescript room.
  chris, guest1 and Wizard are here.
  Obvious exits: north to Large Room

  Large Room
  This is a nondescript room.
  Wizard is here.
  Obvious exits: out to Generic Room

Normal users can create rooms but can't dig paths to the new room inside an existing room they didn't create themselves. They can use go to to travel to the room they created and build up rooms from there. A friendly Wizard can link the rooms later if desired. The room logic is in room.move.


Programs can be written and executed within the environment. This is done by creating a Program object, editing it and compiling it:

@build Program as hello
  You created an object.
  You named it "hello".
  It was registered as hello (#11).

edit hello
  Type .s to save, or .q to lose changes. .? is for help.
1> realuid():tell("Hello World\n");
2> .s
  Edit successful.

@compile hello
  Hello World
  Result: undefined

This "hello world" example gets the current user with realuid and calls the tell method, which sends output to that user's connection.

The code for the Program object is in program.move. Note that the @compile verb wraps the code from the program inside a "function (TARGET) { ...code here... }". TARGET can be set using the @target verb on a program. This enables writing programs that can add verbs to objects. The tricks subdirectory has some examples of this, for example ps.verbs.move, which adds the @ps verb to the target:

define method (TARGET) @ps-verb(b) {
  define player = realuid();
  if (player != TARGET) {
    player:tell("You don't have permission to @ps, I'm sorry.\n");
  } else {
    define tab = get-thread-table();

    player:mtell(["Process table:\n"] + map(function(p) {
      " #" + get-print-string(p[0]) + "\t" +
      p[1].name + "\t\t" +
      get-print-string(p[2]) + "\t" +
      get-print-string(p[3]) + "\n";
    }, tab));
  }
}

TARGET:add-verb(#this, #@ps-verb, ["@ps"]);

If that text is copied and pasted into a program, then @ps can be added to an object with:

@build Program as psprog
  You created an object.
  You named it "psprog".
  It was registered as psprog (#13).

edit psprog
  Type .s to save, or .q to lose changes. .? is for help.
1> ...paste program code above...
2> .s

@target psprog at me
  You target psprog (#13) at Wizard (#7).

@compile psprog
  Result: true

@verbs me
  Verbs defined on Wizard (#7) and it's parents:
    (Wizard (#7))
    @ps

@ps
  Process table:
   #1   Wizard      false   2
   #2   Wizard      false   0
   #3   chris       false   1


All changes to the system are done in memory. A checkpoint method should be called occasionally to save the current state of the database. An example of how to do this is in checkpoint.move but it can also be done by any Wizard calling @checkpoint.

Checkpoints don't overwrite the existing database - they save to a new file of the form move.checkpoint.000, where 000 is an incrementing number. When restarting a system it's important to use the last checkpoint to start from.
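Since each checkpoint goes to a new numbered file, finding the latest one to restart from can be scripted. A small Python sketch (the helper name is mine; the numeric suffix format is assumed from the move.checkpoint.000 name above):

```python
import os
import re

def latest_checkpoint(dirpath="."):
    """Return the highest-numbered move.checkpoint.NNN file, or None."""
    pat = re.compile(r"move\.checkpoint\.(\d+)$")
    best = None
    for name in os.listdir(dirpath):
        m = pat.match(name)
        if m and (best is None or int(m.group(1)) > best[0]):
            best = (int(m.group(1)), name)
    return best[1] if best else None
```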

Programming Language

The programming language used by 3-Move is undocumented but it's pretty easy to follow from the examples. The primitives can be seen in the PRIM.*.c files in the move directory. Functions are of the form:

define function this-is-a-function(arg1, argn) {
  ...
}

The system uses a prototype object system. Objects are created by calling clone on an existing object:

// Create an object cloned from the Root object
define c = Root:clone();

Objects can have fields. These are defined as:

// Define a blah field of the new object and give it a value
define (c) blah = "foo";

// Access the blah field
c.blah;

Objects can have methods:

// Define a constructor for 'c' which gets called when cloned
define method (c) initialize() {
  this.blah = "new blah";
}

Fields are accessed using the dot operator (.) and methods with the colon operator (:). There are separate namespaces for fields and methods.

Objects and fields can have flags set to control permissions. An example from the source:

// Only the owner of an object can see the connection field
define (Player) connection = null;
set-slot-flags(Player, #connection, O_OWNER_MASK);

The flags are:

define O_OWNER_MASK = 0x00000F00;
define O_GROUP_MASK = 0x000000F0;
define O_WORLD_MASK = 0x0000000F;
define O_ALL_R  = 0x00000111;
define O_ALL_W  = 0x00000222;
define O_ALL_X  = 0x00000444;

define O_OWNER_R    = 0x00000100;
define O_OWNER_W    = 0x00000200;
define O_OWNER_X    = 0x00000400;
define O_GROUP_R    = 0x00000010;
define O_GROUP_W    = 0x00000020;
define O_GROUP_X    = 0x00000040;
define O_WORLD_R    = 0x00000001;
define O_WORLD_W    = 0x00000002;
define O_WORLD_X    = 0x00000004;

From within a method it's possible to query the user that called it and from there dynamically check permissions:

define method (Thing) add-alias(n) {
  if (caller-effuid() != owner(this) && !privileged?(caller-effuid()))
    return false;

  this.aliases = this.aliases + [n];
  return true;
}

This ensures that add-alias can only be called on a Thing object if the caller is the owner of the object or is a privileged user. Another example is:

define method (Thing) add-verb(selfvar, methname, pattern) {
  define c = caller-effuid();
  define fl = object-flags(this);

  if ((fl & O_WORLD_W == O_WORLD_W) ||
      ((fl & O_GROUP_W == O_GROUP_W) && in-group-of(c, this)) ||
      ((fl & O_OWNER_W == O_OWNER_W) && c == owner(this)) ||
      privileged?(c)) {

Here add-verb can only be called if the object is world writeable, or group writeable and the caller is a member of the group, or owner writeable and the caller is the owner, or the caller is privileged.
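The same check can be modelled outside move. A Python sketch of the bitmask logic (flag values copied from the listing above; the may_write helper and its boolean parameters are mine, not part of move):

```python
# Write-permission flag values, from the move source above.
O_OWNER_W = 0x00000200
O_GROUP_W = 0x00000020
O_WORLD_W = 0x00000002

def may_write(flags, caller_is_owner, caller_in_group, caller_privileged):
    """Mirror the add-verb check: world writeable, or group writeable and
    the caller in the group, or owner writeable and the caller the owner,
    or a privileged caller."""
    return (((flags & O_WORLD_W) == O_WORLD_W) or
            (((flags & O_GROUP_W) == O_GROUP_W) and caller_in_group) or
            (((flags & O_OWNER_W) == O_OWNER_W) and caller_is_owner) or
            caller_privileged)
```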

Objects can also have flags set:

define method (Root) clone() {
  define n = the-clone(this);
  if (n) {
    set-object-flags(n, O_NORMAL_FLAGS);
    ...
  }
}

set-setuid(Root:clone, false);

Anonymous functions and higher order functions are available. Unfortunately there's no REPL but snippets can be tested in Program objects when logged in, or added to a move file and executed against the database. The result will be printed as part of the output:

define function reduce(f, st, vec) {
  define i = 0;

  while (i < length(vec)) {
    st = f(st, vec[i]);
    i = i + 1;
  }
  st;
}

reduce(function(acc, al) acc + al, 0, [1, 2, 3, 4, 5]);

$ ./move x reduce.move 
importing reduce.move
--> true
--> 15
-->! the compiler returned NULL.

Lightweight threads are spawned using fork and fork/quota. The first, fork, takes a function to spawn in the background. It uses a default CPU quota of 15,000 cycles before the thread terminates:

fork(function () {
    while (true) {
      // ...do something...
      // sleep for one second
    }
  });

Threads are saved in the database when checkpointed and resumed when the database is started. fork/quota allows setting a quota value other than the default of 15,000 cycles. It also allows three special values. A quota value of 0 means the thread should exit as soon as possible. -1 means the thread should run forever, with no quota, and can be checkpointed and resumed on restart like normal threads. A value of -2 means the thread runs forever but is not checkpointed and therefore not resumed at startup.

#define VM_STATE_DYING          0
#define VM_STATE_DAEMON         -1
#define VM_STATE_NOQUOTA        -2

fork/quota(function () {
    while (true) {
      // ...do something...
      // sleep for one second
    }
  }, -1); // -1: run forever (VM_STATE_DAEMON)
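The quota semantics described above can be summarised as a small decision function. A Python sketch (the thread_policy helper is mine; move implements this inside the VM):

```python
# Special quota values, from the VM defines above.
VM_STATE_DYING = 0     # exit as soon as possible
VM_STATE_DAEMON = -1   # run forever, checkpointed and resumed
VM_STATE_NOQUOTA = -2  # run forever, not checkpointed

def thread_policy(quota):
    """Return (runs, checkpointed) for a thread forked with this quota."""
    if quota == VM_STATE_DYING:
        return (False, False)
    if quota == VM_STATE_DAEMON:
        return (True, True)
    if quota == VM_STATE_NOQUOTA:
        return (True, False)
    # A positive quota: runs until its cycle budget is spent,
    # and is checkpointed like any normal thread.
    return (quota > 0, True)
```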

The language has support for first class continuations via the call/cc primitive. This works the same as the Scheme call-with-current-continuation function. An example from the Wikipedia page:

define function foo(ret) {
  ret(2);
  3;
}

foo(function (x) x); // Returns 3
call/cc(foo);        // Returns 2
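The escape behaviour in that example can be emulated in Python with an exception-based continuation. This is an approximation only: a real call/cc continuation can be re-invoked later, while this one can only escape once (call_cc here is my own sketch, not a move or Python primitive):

```python
def call_cc(f):
    """Escape-only call/cc: calling the continuation aborts f with a value."""
    class Escape(Exception):
        pass
    def k(value):
        e = Escape()
        e.value = value
        raise e
    try:
        return f(k)
    except Escape as e:
        return e.value

def foo(ret):
    ret(2)      # as a plain call, the result is discarded...
    return 3

print(foo(lambda x: x))  # 3: ret is an ordinary function here
print(call_cc(foo))      # 2: invoking the continuation escapes with 2
```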


There is a tricks directory that contains utility code and examples. This includes an http server written in move, code for sending mail via smtp, and some bot examples.


The move language looks quite nice. I suspect it'd be useful for things other than virtual worlds - server side applications that can be extended with scripts safely are a common use case. I've put an example server on port 7777 to experiment with. There are a few guest accounts configured:

$ nc 7777

There's a port for SSL connections on 7778 that can be connected to via:

$ openssl s_client -connect

The SSL connection was set up on the server using socat to forward to the 7777 port:

$ openssl genrsa -out move.key 2048
$ openssl req -new -key move.key -x509 -days 3653 -out move.crt
$ cat move.key move.crt > move.pem
$ socat openssl-listen:7778,reuseaddr,pf=ip4,fork,cert=./move.pem,verify=0 TCP:
Tags: 3move 

This site is accessible over tor as hidden service mh7mkfvezts5j6yu.onion, or Freenet using key: