Bluish Coder

Programming Languages, Martial Arts and Computers. The Weblog of Chris Double.


2017-04-24

Shen Language Port for Wasp Lisp

This post intersects two of my favourite lispy languages. Shen is a functional programming language with a number of interesting features. These include:

  • Optional static type checking
  • Pattern matching
  • Integrated Prolog system
  • Parsing libraries

I've written about Shen Prolog before, which gives a bit of a feel for the language.

Wasp Lisp is a small Scheme-like lisp with lightweight concurrency and the ability to send bytecode across the network. It's used in the MOSREF secure remote injection framework. I've written a number of posts about it.

A feature of Shen is that it is designed to run on top of a lighter weight lisp called KLambda. KLambda has only about 46 primitives, many of which already exist in lisp systems, making it possible to write compilers to other languages without too much work. There exist a few Shen ports already. I wanted to port Shen to Wasp Lisp so I can experiment with using the pattern matching, Prolog and types in some of the distributed Wasp code I use.

Wasp Lisp is not actively developed, but the author Scott Dunlop monitors the GitHub repository and processes pull requests. Shen requires features that Wasp Lisp doesn't currently support, like real numbers. I maintain a fork on GitHub that implements the features Shen needs, and I'll upstream any features that apply back to core Wasp Lisp.

This port is heavily based on the Shen Scheme implementation. Much of the code is ported from Scheme to Wasp Lisp and the structure is kept the same. The license for code I wrote is the same as the Shen Scheme license, BSD 3-Clause.

The Shen Source is written in the Shen language. Using an existing Shen implementation this source is compiled to Klambda:

$ shen-chibi
(0-) (load "make.shen")
(1-) (make)
compiling ...

Porting to another language then becomes writing a KLambda interpreter or compiler. In this case it's a compiler from KLambda to Wasp Lisp. Implementing the primitives is also required, but there aren't many of them. Some of the characters that KLambda uses in symbols aren't compatible with the Wasp reader, so I used an S-expression parser to read the KLambda code and then walked the tree, converting expressions as I went. This is written in Wasp code, converted from the original Scheme. In hindsight it probably would have been easier to write this part in Shen and bootstrap it with another Shen instance, to make use of Shen's parsing and pattern matching libraries.
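The shape of that conversion pass can be sketched in Python (a toy illustration, not the actual Wasp code; the characters and replacement strings in UNSAFE are hypothetical, not the real mapping):

```python
# Walk a parsed S-expression tree and rewrite symbols whose characters
# the target reader cannot handle. Symbols are represented as strings,
# lists as Python lists; the substitution table is made up for the sketch.
UNSAFE = {";": "_semi_", "{": "_lb_", "}": "_rb_"}

def sanitize_symbol(sym):
    """Replace reader-hostile characters in a symbol name."""
    for ch, repl in UNSAFE.items():
        sym = sym.replace(ch, repl)
    return sym

def walk(expr):
    """Recursively convert every symbol in an S-expression tree."""
    if isinstance(expr, list):
        return [walk(e) for e in expr]
    if isinstance(expr, str):          # a symbol
        return sanitize_symbol(expr)
    return expr                        # numbers etc. pass through unchanged
```

Running walk over a parsed defun renames only the offending symbols, leaving numbers and the tree structure untouched.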

Shen makes heavy use of tail calls, meaning some form of tail call optimisation is needed for efficiency. In a previous post I mentioned some places where Wasp doesn't identify tail calls. These are cases Shen hits a lot, causing performance issues. I made some changes to the optimizer to identify these cases and it improved Shen on Wasp runtime performance quite a bit.
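The idea of identifying tail positions can be sketched with a toy Python walker over a lisp-like AST (an illustration of the general technique, not the Wasp VM optimizer): the last form of a body is in tail position, and both branches of an if inherit the tail position of the if itself.

```python
def tail_calls(expr, fname, in_tail=True):
    """Collect calls to fname appearing in tail position of expr.
    Toy AST: tuples/lists are applications; ('if', c, t, e) keeps tail
    position in both branches; ('begin', ...) only in its last form."""
    calls = []
    if not isinstance(expr, (tuple, list)):
        return calls                       # atoms contain no calls
    head = expr[0]
    if head == "if":
        _, cond, then, alt = expr
        calls += tail_calls(cond, fname, False)   # test is never a tail position
        calls += tail_calls(then, fname, in_tail)
        calls += tail_calls(alt, fname, in_tail)
    elif head == "begin":
        for sub in expr[1:-1]:
            calls += tail_calls(sub, fname, False)
        calls += tail_calls(expr[-1], fname, in_tail)
    else:
        if head == fname and in_tail:
            calls.append(expr)             # a self-call in tail position
        for sub in expr[1:]:
            calls += tail_calls(sub, fname, False)  # arguments are not tail positions
    return calls
```

A call found this way can be compiled as a jump rather than a stack-growing call, which is exactly the class of case that matters for Shen's recursive style.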

Current Port State

This is a very early version. I've only just got it working. The Shen tests pass with the exception of the Proof Assistant test which hangs when loading.

Note 2017-04-26: The bug with the proof assistant test not passing is now fixed. It was caused by an integer overflow when computing complexities within the Shen prolog code. Wasp integers are smaller than other Shen implementations which is why none of them hit the issue. The binaries have been updated with this fix.

The port is slower than I'd like - about half the speed of the Shen C interpreter and significantly slower than Shen Scheme and Shen on SBCL. I've done some work on optimizing tail calls in the fork of the Wasp VM for Shen but there's much more work on the entire port that could improve things.

Binaries

The following compiled binaries are available:

shen_static.bz2. This is a static 64-bit Linux binary with no dependencies. It should run on any 64-bit Linux system. Decompress with:

$ bunzip2 shen_static.bz2
$ chmod +x shen_static
$ ./shen_static

shen_macos.bz2. 64-bit binary for Mac OS. Decompress with bunzip2 as above.

shen.zip. The zip file contains a Windows 64-bit binary, shen.exe. It should run on any modern 64-bit Windows system.

Building

First, build the fork of Wasp Lisp needed to run Shen:

$ git clone --branch shen https://github.com/doublec/WaspVM wasp-shen
$ cd wasp-shen
$ make install

Follow the prompts for the location to install the Wasp Lisp binaries and add the bin directory of that location to your PATH:

$ export PATH=$PATH:/path/to/install/bin

Shen is provided in source code format from the Shen Sources github repository. The code is written in Shen. It needs a working Shen system to compile that code to KLambda, a small Lisp subset that Shen uses as a virtual machine.

This KLambda code can be found in the kl directory in the shen-wasp repository. These KLambda files are compiled to Wasp Lisp and stored as compiled code in the compiled directory. The shen-wasp repository includes a recent version of these files. To generate, or re-generate, them run the following commands:

$ git clone https://github.com/doublec/shen-wasp
$ cd shen-wasp
$ rlwrap wasp
>> (import "driver")
>> (compile-all)
Compiling toplevel.kl
Compiling core.kl
Compiling sys.kl
Compiling sequent.kl
Compiling yacc.kl
Compiling reader.kl
Compiling prolog.kl
Compiling track.kl
Compiling load.kl
Compiling writer.kl
Compiling macros.kl
Compiling declarations.kl
Compiling types.kl
Compiling t-star.kl

This will create the Wasp Lisp code in compiled/*.ms files, and the compiled bytecode in compiled/*.mo files.

Creating a Shen executable can be done with:

$ waspc -exe shen shen.ms
$ chmod +x shen
$ rlwrap ./shen
Shen, copyright (C) 2010-2015 Mark Tarver
www.shenlanguage.org, Shen 20.0
running under Wasp Lisp, implementation: WaspVM
port 0.3 ported by Chris Double


(0-) 

Note that it takes a while to start up as it runs through the Shen and KLambda initialization.

Running from the Wasp REPL

Shen can be run and debugged from the Wasp REPL. To load the compiled code and run Shen:

$ rlwrap wasp
>> (import "driver")
>> (load-all)
>> (kl:shen.shen)
Shen, copyright (C) 2010-2015 Mark Tarver
www.shenlanguage.org, Shen 20.0
running under Wasp Lisp, implementation: WaspVM
port 0.3 ported by Chris Double


(0-)

When developing on the compiler it's useful to use eval-all instead of load-all. This will load the KLambda files, compile them to Wasp Lisp and eval them:

>> (eval-all)
>> (kl:shen.shen)
...

A single input line of Shen can be entered and run, returning to the Wasp REPL with:

>> (kl:shen.read-evaluate-print) 
(+ 1 2)
3:: 3

KLambda functions can be called from Wasp by prefixing them with kl:. For example:

>> (kl:shen.read-evaluate-print)
(define factorial
  1 -> 1
  X -> (* X (factorial (- X 1))))
factorial:: factorial
>> (kl:factorial 10)
:: 3628800

Shen allows introspecting compiled Shen functions and examining the KLambda code. From the Wasp REPL this is useful for viewing the KLambda and comparing with the generated Wasp Lisp:

>> (kl:ps 'factorial)
:: (defun factorial (V1172) (cond (...) (...)))
>> (pretty (kl:ps 'factorial))
(defun factorial (V1172 ) (cond ((= 1 V1172 ) 1 ) (#t (* V1172 (factorial (- V1172 1 ) ) ) ) ) ) :: null
>> (pretty (kl->wasp (kl:ps 'factorial)))
(begin (register-function-arity (quote factorial ) 1 )
       (define (kl:factorial V1172)
         (cond
           ((kl:= 1 V1172) 1)
           (#t (* V1172 (kl:factorial (- V1172 1))))))
       (quote factorial ) ) :: null

Cross Compilation

Wasp binaries are a small Wasp VM stub plus the compiled Lisp code appended to it. This makes building for other platforms easy as long as you have the stub for that platform. Wasp can be built for Android and static binaries via musl are possible.
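A generic sketch of that stub-plus-payload technique in Python (my own illustration with a made-up footer format; WaspVM's actual image layout will differ): the payload is appended to the stub along with a fixed-size footer recording its length, which the stub can read back from the end of its own file at startup.

```python
import struct

MAGIC = b"WASP"  # hypothetical marker; not WaspVM's real format

def pack(stub: bytes, payload: bytes) -> bytes:
    """Append payload plus an 8-byte footer (marker + length) to a stub."""
    return stub + payload + MAGIC + struct.pack("<I", len(payload))

def unpack(image: bytes) -> bytes:
    """Recover the payload by reading the footer at the end of the image."""
    assert image[-8:-4] == MAGIC, "not a packed image"
    (length,) = struct.unpack("<I", image[-4:])
    return image[-8 - length:-8]
```

The stub only needs to seek to the end of its own executable, check the marker, and read the recorded number of bytes, which is what makes cross-platform builds a matter of swapping stubs.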

I've made the following stubs available for building binaries for other systems:

Decompress them and copy into the lib/waspvm-stubs directory where Wasp Lisp was installed. Shen can then be built on any host platform for 64 bit linux, 64 bit Linux static binaries, 64 bit Windows or 64 bit Mac OS with:

$ waspc -exe shen -platform linux-x86_64 shen.ms
$ waspc -exe shen_static -platform static-linux-x86_64 shen.ms
$ waspc -exe shen.exe -platform win-x86_64 shen.ms
$ waspc -exe shen_macos -platform Darwin-x86_64 shen.ms

Learning Shen

Some places to go to learn Shen:

Other Ports

Tags: shen  waspvm 

2017-04-09

Exploring 3-Move - A LambdaMOO inspired environment

I was a fan of MUDs from my earliest introduction to computers. I remember writing to Richard Bartle when I was young, asking about the possibility of accessing MUD1 from New Zealand after having read about it in a magazine. The reply was very positive but unfortunately the cost of 300 baud modem access at international phone rates was prohibitive. Later in life, my first use of the internet and a shell account on my ISP was to compile and run a MUD client.

The MOO variants of MUDs are particularly interesting as they are multi-user, programmable, interactive systems. They're like IRC where users can create objects, rooms and worlds by writing programs within the system. This resulted in interesting programming environments with permission models for different levels of users. Content, including code, was stored in a persistent object database. LambdaMOO was a very popular instance of a MOO.

A while back I stumbled across 3-Move, a multi user networked online text-based programmable environment, by Tony Garnock-Jones. It's a neat system that includes:

  • Persistent object-oriented database
  • A MOO inspired security model
  • Prototype-based object-oriented language
  • First-class functions, continuations and preemptive green threads.

There's not much written in the way of documentation on getting it running, so this post documents how I got it working. It appears to no longer be actively developed, but it's a nice small system to learn from.

Building

Building 3-move requires cloning the source and running make:

$ git clone https://github.com/tonyg/3-move
$ cd 3-move
$ make
$ ./move/move
move [-t] <dbfilename> [<move-source-code-file> ...]

This produces the move executable, which is the virtual machine, and checkpoint-cleanup, a helper program to clean up database checkpoint files.

The move executable requires a database as an argument. This database stores the state of the persistent world. It's loaded in memory when move is run and can be saved by occasionally checkpointing the system. Optional arguments are move source code files that are compiled and executed.

In the db directory are a number of move source files that contain the code for a multi user virtual environment. The command parser, socket usage, line editor, etc are all written in these move files.

Database

To create an initial database there is a build script that creates a database with the content of the move file. It can be run with:

$ cd db
$ ./build

This creates the database in a file, db, and a symbolic link to the move executable in the current directory for easy execution. All the build script does is run move on files in the right order to build the database. It's equivalent to:

$ ./move db root.move && mv move.checkpoint.000 db
$ ./move db system.move && mv move.checkpoint.000 db
$ ./move db thing.move && mv move.checkpoint.000 db
$ ./move db login.move && mv move.checkpoint.000 db
$ ./move db player.move && mv move.checkpoint.000 db
$ ./move db room.move && mv move.checkpoint.000 db
$ ./move db exit.move && mv move.checkpoint.000 db
$ ./move db note.move && mv move.checkpoint.000 db
$ ./move db program.move && mv move.checkpoint.000 db
$ ./move db registry.move && mv move.checkpoint.000 db

Each run of move creates a new database called move.checkpoint.000, containing the state of the old database plus any changes made by the move file. This is then renamed back to db and run again on the next file. The end result is a complete database with a default virtual environment.

Running

With the database built the system can be run with:

$ ./move db restart.move

restart.move calls start-listening() to start the socket server accepting connections:

set-realuid(Wizard);
set-effuid(Wizard);

start-listening();

It calls the set-realuid and set-effuid functions with Wizard before calling start-listening to ensure that the system runs as the default "Wizard" user, which has full permissions to call the socket-related functions.

start-listening is implemented in login.move. It creates a server socket that accepts connections on port 7777. It can be connected to via telnet, netcat, or a similar program:

$ nc 127.0.0.1 7777
        _/      _/    _/_/_/    _/      _/  _/_/_/_/_/  
       _/_/  _/_/  _/      _/    _/  _/    _/       
      _/  _/  _/  _/      _/    _/  _/    _/_/_/        
     _/      _/  _/      _/      _/      _/         
    _/      _/    _/_/_/        _/      _/_/_/_/_/      

3-MOVE Copyright (C) 1997--2009 Tony Garnock-Jones.
This program comes with ABSOLUTELY NO WARRANTY; for details see
https://leastfixedpoint.com/tonyg/kcbbs/projects/3-move.html.
This is free software, and you are welcome to redistribute it
under certain conditions; see http://github.com/tonyg/3-move for
details.


login: 

The database only contains one user, Wizard, to begin with. It has no password:

login: Wizard
Password: 

Logging in as player Wizard...

Welcome to MOVE.

Generic Room
~~~~~~~~~~~~
This is a nondescript room.
Wizard is here.

The @verbs command can be used to find out what commands can be sent to objects:

@verbs me
  Verbs defined on Wizard (#7) and it's parents:
    (Wizard (#7))
    @shutdown
    @checkpoint
    (Generic Player (#2))
    look
    @setpass <pass>
     ...

@verbs here
  Verbs defined on Generic Room (#3) and it's parents:
    (Generic Room (#3))
    say <sent>
    emote<sent>
    @@shout <sent>
      ...

@examine is another useful verb for finding out internal details of an object:

@examine me
  Wizard (#7) (owned by Wizard (#7))
  Location: #<object Generic Room (#3)>
  Contents: [Registry (#0), Generic Program (#6), Generic Note (#5), Generic Exit (#4),
  Generic Thing (#1)]
  Parent(s): Generic Player (#2)
  Methods: [@checkpoint-verb, @shutdown-verb]
  Slots: [verbs, connection, is-programmer, registry-number, name, awake]
  Verbs: [@shutdown-verb, @checkpoint-verb]

It's important to set a password when first logging in:

@setpass ********
  Password changed.

Users

A multi user environment without other users isn't much fun. Guest users can be added with:

@build Guest as guest1
  You created an object.
  You named it "guest1".
  It was registered as guest1 (#9).

These are special users in that logging in with the name guest will pick one of the guest users that is not currently logged in. This allows people to explore the system without creating a user. Specific users can also be created:

@build Player as chris
  You created an object.
  You named it "chris".
  It was registered as chris (#10).

Here's an example interaction of the chris user logging in:

@setpass foo
  Password changed.
@describe chris
  Editing description of #<object chris (#10)>.
  Type .s to save, or .q to lose changes. .? is for help.
2> .l
  --- Current text:
  1> You see a player who needs to @describe %r.
2> .d 1
1> An amorphous blob shimmers in the light.
2> .s
  Setting description...
  Description set.

look at me

  chris
  ~~~~~
  An amorphous blob shimmers in the light.
  (chris is awake.)

Creating Rooms

Wizards can create rooms and routes to them with @dig:

@dig here to Large Room as north
  You dig the backlink exit, named "out", from "Large Room" to "Generic Room (#3)".
  You dig the outward exit, named "north", from "Generic Room (#3)" to "Large Room".

look
  Generic Room
  ~~~~~~~~~~~~
  This is a nondescript room.
  chris, guest1 and Wizard are here.
  Obvious exits: north to Large Room

north
  Large Room
  ~~~~~~~~~~
  This is a nondescript room.
  Wizard is here.
  Obvious exits: out to Generic Room

Normal users can create rooms but can't dig paths to the new room from inside an existing room they didn't create themselves. They can use the go to command to travel to the room they created and build up rooms from there. A friendly Wizard can link the rooms later if desired. The room logic is in room.move.

Programs

Programs can be written and executed within the environment. This is done by creating a Program object, editing it and compiling it:

@build Program as hello
  You created an object.
  You named it "hello".
  It was registered as hello (#11).

edit hello
  Type .s to save, or .q to lose changes. .? is for help.
1> realuid():tell("Hello World\n");
2> .s
  Edit successful.

@compile hello
  Hello World
  Result: undefined

This "hello world" example gets the current user with realuid and calls the tell method which sends output to that users connection.

The code for the Program object is in program.move. Note that the @compile verb wraps the code from the program inside a "function (TARGET) { ...code here... }". TARGET can be set using the @target verb on a program. This enables writing programs that can add verbs to objects. The tricks subdirectory has some examples of this, for example ps.verbs.move, which adds the @ps verb to the target:

define method (TARGET) @ps-verb(b) {
  define player = realuid();
  if (player != TARGET) {
    player:tell("You don't have permission to @ps, I'm sorry.\n");
  } else {
    define tab = get-thread-table();

    player:mtell(["Process table:\n"] + map(function(p) {
      " #" + get-print-string(p[0]) + "\t" +
    p[1].name + "\t\t" +
    get-print-string(p[2]) + "\t" +
    get-print-string(p[3]) + "\n";
    }, tab));
  }
}
TARGET:add-verb(#this, #@ps-verb, ["@ps"]);  

If that text is copied and pasted into a program, then @ps can be added to an object with:

@build Program as psprog
  You created an object.
  You named it "psprog".
  It was registered as psprog (#13).

edit psprog
  Type .s to save, or .q to lose changes. .? is for help.
1> ...paste program code above...
2> .s

@target psprog at me
  You target psprog (#13) at Wizard (#7).

@compile psprog
  Result: true

@verbs me
  Verbs defined on Wizard (#7) and it's parents:
    (Wizard (#7))
    @shutdown
    @checkpoint
    @ps
    ...

@ps
  Process table:
   #1   Wizard      false   2
   #2   Wizard      false   0
   #3   chris       false   1

Checkpointing

All changes to the system are done in memory. A checkpoint method should be called occasionally to save the current state of the database. An example of how to do this is in checkpoint.move but it can also be done by any Wizard calling @checkpoint.

Checkpoints don't overwrite the existing database - they save to a new file of the form move.checkpoint.000, where 000 is an incrementing number. When restarting a system it's important to use the last checkpoint to start from.
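Picking the right file to restart from can be scripted; a small Python helper based on the naming pattern described above (the function name is my own):

```python
import re

def latest_checkpoint(filenames):
    """Return the checkpoint file with the highest numeric suffix, or None."""
    pattern = re.compile(r"^move\.checkpoint\.(\d+)$")
    numbered = [(int(m.group(1)), name)
                for name in filenames
                if (m := pattern.match(name))]
    if not numbered:
        return None
    return max(numbered)[1]   # compare by numeric suffix, return the name
```

Fed the contents of the database directory (e.g. from os.listdir), it returns the newest checkpoint to rename back to db before restarting.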

Programming Language

The programming language used by 3-Move is undocumented but it's pretty easy to follow from the examples. The primitives can be seen in the PRIM.*.c files in the move directory. Functions are of the form:

define function this-is-a-function(arg1, argn) {
   ...
}

The system uses a prototype object system. Objects are created by calling clone on an existing object:

// Create an object cloned from the Root object
define c = Root:clone();

Objects can have fields. These are defined as:

// Define a blah field of the new object and give it a value
define (c) blah = "foo";

// Access the blah field
c.blah;

Objects can have methods:

// Define a constructor for 'c' which gets called when cloned
define method (c) initialize() {
  as(Root):initialize();
  this.blah = "new blah";
}

Fields are accessed using the dot operator (.) and methods with the colon operator (:). There are separate namespaces for fields and methods.
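The flavour of this object model can be sketched in Python (a toy model, not move's implementation): each object keeps separate field and method tables, and lookups fall back along the parent chain, mirroring how . and : resolve in different namespaces.

```python
class Proto:
    """Toy prototype object with separate field and method namespaces."""
    def __init__(self, parent=None):
        self.parent = parent
        self.fields = {}     # the '.' namespace
        self.methods = {}    # the ':' namespace

    def clone(self):
        return Proto(parent=self)

    def get_field(self, name):
        """Field lookup ('.'): check this object, then the parent chain."""
        if name in self.fields:
            return self.fields[name]
        if self.parent:
            return self.parent.get_field(name)
        raise AttributeError(name)

    def call(self, name, *args):
        """Method call (':'): find the method up the chain, invoke on self."""
        obj = self
        while obj is not None:
            if name in obj.methods:
                return obj.methods[name](self, *args)
            obj = obj.parent
        raise AttributeError(name)

# Mirror of the Root/clone/field-shadowing examples above
root = Proto()
root.fields["blah"] = "foo"
root.methods["describe"] = lambda self: "blah is " + self.get_field("blah")
c = root.clone()
c.fields["blah"] = "new blah"   # shadows the parent's field
```

Because fields and methods live in different tables, an object can have a field and a method with the same name without collision, just as in move.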

Objects and fields can have flags set to control permissions. An example from the source:

// Only the owner of an object can see the connection field
define (Player) connection = null;
set-slot-flags(Player, #connection, O_OWNER_MASK);

The flags are:

define O_OWNER_MASK = 0x00000F00;
define O_GROUP_MASK = 0x000000F0;
define O_WORLD_MASK = 0x0000000F;
define O_ALL_R  = 0x00000111;
define O_ALL_W  = 0x00000222;
define O_ALL_X  = 0x00000444;

define O_OWNER_R    = 0x00000100;
define O_OWNER_W    = 0x00000200;
define O_OWNER_X    = 0x00000400;
define O_GROUP_R    = 0x00000010;
define O_GROUP_W    = 0x00000020;
define O_GROUP_X    = 0x00000040;
define O_WORLD_R    = 0x00000001;
define O_WORLD_W    = 0x00000002;
define O_WORLD_X    = 0x00000004;

From within a method it's possible to query the user that called it and from there dynamically check permissions:

define method (Thing) add-alias(n) {
  if (caller-effuid() != owner(this) && !privileged?(caller-effuid()))
    return false;

  this.aliases = this.aliases + [n];
  return true;
}

This ensures that add-alias can only be called on the Thing object if the caller is the owner of the object or is a privileged user. Another example is:

define method (Thing) add-verb(selfvar, methname, pattern) {
  define c = caller-effuid();
  define fl = object-flags(this);

  if ((fl & O_WORLD_W == O_WORLD_W) ||
      ((fl & O_GROUP_W == O_GROUP_W) && in-group-of(c, this)) ||
      ((fl & O_OWNER_W == O_OWNER_W) && c == owner(this)) ||
      privileged?(c)) {
    ...
  }
}

Here add-verb can only be called if the object is world writeable, or group writeable and the caller is a member of the group, or owner writeable and the caller is the owner, or the caller is privileged.
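Restating that check in Python makes the mask logic easy to test (parentheses made explicit; the ownership and group-membership inputs are simplified stand-ins for move's object queries):

```python
# Write-permission bits from the flag table above
O_OWNER_W = 0x00000200
O_GROUP_W = 0x00000020
O_WORLD_W = 0x00000002

def may_write(flags, caller, owner, in_group, privileged):
    """Mirror of the add-verb permission test: world-writable, or
    group-writable and in the group, or owner-writable and the owner,
    or a privileged caller."""
    return (((flags & O_WORLD_W) == O_WORLD_W) or
            (((flags & O_GROUP_W) == O_GROUP_W) and in_group) or
            (((flags & O_OWNER_W) == O_OWNER_W) and caller == owner) or
            privileged)
```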

Objects can also have flags set:

define method (Root) clone() {
  define n = the-clone(this);
  if (n) {
    set-object-flags(n, O_NORMAL_FLAGS);
    n:initialize();
  }
  n;
}
set-setuid(Root:clone, false);

Anonymous functions and higher order functions are available. Unfortunately there's no REPL but snippets can be tested in Program objects when logged in, or added to a move file and executed against the database. The result will be printed as part of the output:

define function reduce(f, st, vec) {
  define i = 0;

  while (i < length(vec)) {
    st = f(st, vec[i]);
    i = i + 1;
  }

  st;
}

reduce(function(acc, al) acc + al, 0, [1, 2, 3, 4, 5]);

$ ./move x reduce.move 
importing test2.move
--> true
--> 15
-->! the compiler returned NULL.

Lightweight threads are spawned using fork and fork/quota. The first, fork, takes a function to spawn in the background. It uses a default CPU quota of 15,000 cycles before the thread terminates:

fork(function () {
    while (true) {
      ...do something...
      // sleep for one second
      sleep(1);
    }
  });

Threads are saved in the database when checkpointed and resumed when the database is started. fork/quota allows setting a quota value other than the default of 15,000 cycles. It also allows three special values. A quota value of 0 means the thread should exit as soon as possible. -1 means the thread should run forever, with no quota, and can be checkpointed and resumed on restart like normal threads. A value of -2 means the thread runs forever but is not checkpointed and therefore not resumed at startup.

#define VM_STATE_DYING          0
#define VM_STATE_DAEMON         -1
#define VM_STATE_NOQUOTA        -2

fork/quota(function () {
    while (true) {
      ...do something...
      // sleep for one second
      sleep(1);
    }
  }, VM_STATE_DAEMON);

The language has support for first class continuations via the call/cc primitive. This works the same as the Scheme call-with-current-continuation function. An example from the Wikipedia page:

define function foo(ret) {
  ret(2);
  3;
}

foo(function (x) x); // Returns 3
call/cc(foo);        // Returns 2

Tricks

There is a tricks directory that contains utility code and examples. This includes an http server written in move, code for sending mail via smtp, and some bot examples.

Conclusion

The move language looks quite nice. I suspect it'd be useful for things other than virtual worlds - server side applications that can be extended with scripts safely are a common use case. I've put an example server on bluishcoder.co.nz port 7777 to experiment with. There are a few guest accounts configured:

$ nc bluishcoder.co.nz 7777

There's a port for SSL connections on 7778 that can be connected to via:

$ openssl s_client -connect bluishcoder.co.nz:7778

The SSL connection was set up on the server using socat to forward to the 7777 port:

$ openssl genrsa -out move.key 2048
$ openssl req -new -key move.key -x509 -days 3653 -out move.crt
$ cat move.key move.crt > move.pem
$ socat openssl-listen:7778,reuseaddr,pf=ip4,fork,cert=./move.pem,verify=0 TCP:127.0.0.1:7777
Tags: 3move 

2017-03-28

Introduction to the Freenet API

Freenet is an anonymous, secure, distributed datastore. I've written about using Freenet before, including using it as the backend for static websites. In this post I'll demonstrate how to use the Freenet API to push data into the Freenet network and retrieve data from it.

Unfortunately the Freenet protocol documentation is in a state of flux as it moves from a previous wiki to a GitHub-based wiki. This means some of the protocol information may be incomplete. The old wiki data is stored in Freenet under the following keys:

  • CHK@San0hXZSyCEvvb7enNUIWrkiv8MDChn8peLJllnWt4s,MCNn4eUbl5NW9quOm4JTU~0rsWu6QlIdek9VtpFpXe4,AAMC--8/freenetproject-oldwiki.tar
  • CHK@4sff2MgvexbsfgSuqNOwqVDkP~GaZPsZ1rJVKUg87g8,unU3TZ93pYGYPCoH7LC53dlc5~Rmar8SKF9fsZnQX-8,AAMC--8/freenetproject-wiki.tar

The API for external programs to communicate with a running Freenet daemon is FCPv2. It's a text-based protocol accessed using a TCP socket connection on port 9481. The FCP protocol can be enabled or disabled from the Freenet configuration settings, but it is enabled by default, so the examples here should work in a default installation.

The basic protocol of FCP uses a unit called a 'message'. Messages are sent over the socket starting with a line naming the message, followed by a series of 'Key=Value' lines, and ending with 'EndMessage'. Some messages contain binary data and end differently; I'll discuss this after some basic examples.
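This framing is straightforward to generate and parse. A minimal Python sketch for the text-only form of a message (the function names are mine; the field values come from the examples that follow):

```python
def build_message(name, fields):
    """Serialize an FCP message: name line, Key=Value lines, EndMessage."""
    lines = [name] + [f"{k}={v}" for k, v in fields.items()] + ["EndMessage"]
    return "\n".join(lines) + "\n"

def parse_message(text):
    """Parse a single text-only FCP message back into (name, fields)."""
    lines = text.strip().split("\n")
    assert lines[-1] == "EndMessage"
    fields = dict(line.split("=", 1) for line in lines[1:-1])
    return lines[0], fields
```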

For the examples that follow I'll be using bash. I debated picking from my usual toolkit of obscure languages but decided to use something that doesn't require installation on Linux and Mac OS X and may also run on Windows 10. The examples should be readable enough for non-bash users to pick up and translate to their favourite language, especially given the simplicity of the protocol. I've found the ability to throw together a quick bash script to do inserts and retrievals to be useful.

Hello

The FCPv2 documentation lists the messages that can be sent from the client to the Freenet node, and what can be expected to be received from the node to the client. On first connecting to the node the client must send a ClientHello message. This looks like:

ClientHello
Name=My Client Name
ExpectedVersion=2.0
EndMessage

The Name field uniquely identifies the client to the node. Disconnecting and reconnecting with the same Name retains access to a persistent queue of data being inserted and retrieved. An error is returned if an attempt to connect is made when a client with the same Name is already connected.

The node returns with a NodeHello Message. This looks like:

NodeHello
Build=1477
ConnectionIdentifier=...
...
EndMessage

The various fields are described in the NodeHello documentation. This interaction can be tested using netcat or telnet:

$ nc localhost 9481
ClientHello
Name=My Client Name
ExpectedVersion=2.0
EndMessage

NodeHello
CompressionCodecs=4 - GZIP(0), BZIP2(1), LZMA(2), LZMA_NEW(3)
Revision=build01477
Testnet=false
...
ExtRevision=v29
EndMessage

You can connect to a socket from bash using 'exec' and file redirection to a pseudo-path describing the tcp socket. See HACKTUX notes from the trenches for details. The above netcat interaction looks like this from bash:

#! /bin/bash
    function wait_for {
      local line
      local str=$1
      while read -r line
      do
        >&2 echo "$line"
        if [ "$line" == "$str" ]; then
          break
        fi
      done
    }
       
    exec 3<>/dev/tcp/127.0.0.1/9481
       
    cat >&3 <<HERE
    ClientHello
    Name=My Client Name
    ExpectedVersion=2.0
    EndMessage
    HERE
       
    wait_for "NodeHello" <&3
    wait_for "EndMessage" <&3
       
    exec 3<&-
    exec 3>&-

The exec line opens a socket on port 9481, the FCP port, and assigns it to file descriptor '3'. Then we use cat to write the ClientHello message to that file descriptor. wait_for reads lines from the socket, displaying them on standard error (file descriptor '2'), until it reaches a specific line passed as an argument. Here we wait for the NodeHello line and then the EndMessage line to cover the NodeHello response from the server. The remaining two exec lines close the socket.

The full bash script is available in hello.sh.

Retrieving data inline

The FCP message ClientGet is used to retrieve data stored at a specific key. The data can be returned inline within a message or written to a file accessible by the node. An example message for retrieving a known key is:

ClientGet
URI=CHK@otFYYKhLKFzkAKhEHWPzVAbzK9F3BRxLwuoLwkzefqA,AKn6KQE7c~8G5dLa4TuyfG16XIUwycWuFurNJYjbXu0,AAMC--8/example.txt
Identifier=1234
Verbosity=0
ReturnType=direct
EndMessage

This retrieves the contents of a particular CHK key where I stored example.txt. The Verbosity is set to not return any progress messages, just send messages when the entire contents are retrieved. A ReturnType of direct means return the data within the AllData message which is received when the retrieval is complete. The result messages are:

DataFound
Identifier=1234
CompletionTime=1490614072644
StartupTime=1490614072634
DataLength=21
Global=false
Metadata.ContentType=text/plain
EndMessage

AllData
Identifier=1234
CompletionTime=1490614072644
StartupTime=1490614072634
DataLength=21
Global=false
Metadata.ContentType=text/plain
Data
Hello Freenet World!

The first message received is DataFound giving information about the completed request. The following message, AllData, returns the actual data. Note that it does not include an EndMessage. Instead it has a Data terminator followed by the data as a sequence of bytes of length DataLength.
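Parsing AllData means switching from line-oriented reads to a fixed-length byte read after the Data line. A Python sketch over an in-memory stream, using the example message above (the function name is mine):

```python
import io

def read_all_data(stream):
    """Parse an AllData message: header lines up to 'Data', then exactly
    DataLength bytes of payload."""
    headers = {}
    assert stream.readline().rstrip(b"\n") == b"AllData"
    while True:
        line = stream.readline().rstrip(b"\n")
        if line == b"Data":
            break                      # switch to byte-oriented reading
        key, value = line.split(b"=", 1)
        headers[key.decode()] = value.decode()
    payload = stream.read(int(headers["DataLength"]))
    return headers, payload
```

The same approach works against the socket in any language: buffer lines until the Data terminator, then do a counted read of DataLength bytes, which is exactly what the dd call does in the bash version below.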

To process AllData from bash I use a function to extract the DataLength when it finds it:

    function get_data_length {
      local line
      while read -r line
      do
       if [[ "$line" =~ ^DataLength=.* ]]; then
         echo "${line##DataLength=}"
         break
       fi
      done
    }

This is called from the script after the ClientHello and NodeHello exchange:

    cat >&3 <<HERE
    ClientGet
    URI=CHK@otFYYKhLKFzkAKhEHWPzVAbzK9F3BRxLwuoLwkzefqA,AKn6KQE7c~8G5dLa4TuyfG16XIUwycWuFurNJYjbXu0,AAMC--8/example.txt
    Identifier=1234
    Verbosity=0
    ReturnType=direct
    EndMessage
    HERE
        
    wait_for "AllData" <&3
    len=$(get_data_length <&3)
    wait_for "Data" <&3
    dd status=none bs="$len" count=1 <&3 >&2

The dd command reads the specified number of bytes from the socket and outputs it to standard output. This is the contents of the key we requested:

$ ./getinline
Hello Freenet World!

The full bash script is available in getinline.sh.

The main downside of using inline data requests is that large files can exhaust the memory of the node.

Request Direct Disk Access

A variant of ClientGet requests the node to write the result to a file on disk instead of sending it as part of the AllData message. This is useful for large files that don't fit in memory. The data is written to the filesystem that the node has access to so it's most useful when the FCP client and the freenet node are on the same system.

Being able to tell the server to write directly to the filesystem is a security issue, so Freenet requires a negotiation to happen first to confirm that the client has access to the directory that you are requesting the server to write to. This negotiation requirement, known as TestDDA, can be disabled in the configuration settings of the node but that's not recommended.

First the client must send a TestDDARequest message listing the directory it wants access to and whether read or write access is being requested.

TestDDARequest
Directory=/tmp/
WantWriteDirectory=true
WantReadDirectory=true
EndMessage

The server replies with a TestDDAReply:

TestDDAReply
Directory=/tmp/
ReadFilename=/tmp/testr.tmp
WriteFilename=/tmp/testw.tmp
ContentToWrite=RANDOM
EndMessage

The script should now write the data contained in the ContentToWrite key into the file referenced by the WriteFilename key. It should read the data from the file referenced in the ReadFilename key and send that data in a TestDDAResponse:

TestDDAResponse
Directory=/tmp/
ReadContent=...content from TestDDAReply...
EndMessage

The server then responds with a TestDDAComplete:

TestDDAComplete
Directory=/tmp/
ReadDirectoryAllowed=true
WriteDirectoryAllowed=true
EndMessage

Once that dance is complete, put and get requests can be made to that specific directory. The bash code for doing this is:

    cat >&3 <<HERE
    TestDDARequest
    Directory=/tmp/
    WantWriteDirectory=true
    WantReadDirectory=true
    EndMessage
    HERE
    
    wait_for "TestDDAReply" <&3
    content=$(process_dda_reply <&3)
    
    cat >&3 <<HERE
    TestDDAResponse
    Directory=/tmp/
    ReadContent=$content
    EndMessage
    HERE
    
    wait_for "TestDDAComplete" <&3
    process_dda_complete <&3

It uses a helper function process_dda_reply to handle the TestDDAReply message from the server:

    function process_dda_reply {
      local line
      local readfile=""
      local writefile=""
      local content=""
    
      while read -r line
      do
       if [[ "$line" =~ ^ReadFilename=.* ]]; then
         readfile="${line##ReadFilename=}"
       fi
       if [[ "$line" =~ ^WriteFilename=.* ]]; then
         writefile="${line##WriteFilename=}"
       fi
       if [[ "$line" =~ ^ContentToWrite=.* ]]; then
         content="${line##ContentToWrite=}"
       fi
       if [[ "$line" == "EndMessage" ]]; then
         echo -n "$content" >"$writefile"
         cat "$readfile"
         break
       fi
      done
    }

This function reads the fields of the TestDDAReply and writes the required content to the write file and returns the data contained in the read file. This returned data is sent in the TestDDAResponse. The process_dda_complete function checks the TestDDAComplete fields to ensure that access was granted. The full bash script is available in testdda.sh.
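The process_dda_complete helper isn't shown above. A minimal sketch, assuming it only needs to report the two *DirectoryAllowed fields (the output format here is my own), could look like:

```shell
# Hypothetical sketch of process_dda_complete: read the fields of a
# TestDDAComplete message (the message name itself was already consumed
# by wait_for) and report whether read and write access were granted.
function process_dda_complete {
  local line read_ok="false" write_ok="false"
  while read -r line
  do
    if [[ "$line" == "ReadDirectoryAllowed=true" ]]; then
      read_ok="true"
    fi
    if [[ "$line" == "WriteDirectoryAllowed=true" ]]; then
      write_ok="true"
    fi
    if [[ "$line" == "EndMessage" ]]; then
      echo "read allowed: $read_ok, write allowed: $write_ok"
      break
    fi
  done
}
```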

Retrieving data to disk

The ReturnType field of the ClientGet message can be set to disk to write directly to a disk file once the TestDDA process is complete. The message looks like this:

    cat >&3 <<HERE
    ClientGet
    URI=CHK@HH-OJMEBuwYC048-Ljph0fh11oOprLFbtB7QDi~4MWw,B~~NJn~XrJIYEOMPLw69Lc5Bv6BcGWoqJbEXrfX~VCo,AAMC--8/pitcairn_justice.jpg
    Identifier=1234
    Verbosity=1
    ReturnType=disk
    Filename=/tmp/pitcairn_justice.png
    EndMessage
    HERE

In this case we're retrieving a file I've inserted previously. The Verbosity key is set to 1 to enable SimpleProgress messages to be received. These messages contain fields showing the total number of blocks that can be fetched for the file, the number of blocks required to reconstruct it, how many we've successfully retrieved so far, and a few other fields. The following bash function processes this and prints the result:

    function handle_progress {
      local line
      local total=0
      local succeeded=0
      local required=0
      local final=""
    
      while read -r line
      do
       if [[ "$line" =~ ^Total=.* ]]; then
         total="${line##Total=}"
       fi
       if [[ "$line" =~ ^Required=.* ]]; then
         required="${line##Required=}"
       fi
       if [[ "$line" == "FinalizedTotal=true" ]]; then
         final="final"
       fi
       if [[ "$line" =~ ^Succeeded=.* ]]; then
         succeeded="${line##Succeeded=}"
       fi
       if [[ "$line" == "EndMessage" ]]; then
         echo "Progress: retrieved $succeeded out of $required required and $total total ($final)"
         break
       fi
      done
    }

The FinalizedTotal field indicates whether the Total field is accurate and will not change. Until then, Total can increase as more knowledge about the file is gained. The Required field is the number of blocks that need to be received to reconstruct the file. It is less than Total due to redundancy in the way Freenet stores data, which accounts for nodes going away and data being lost.

The handle_progress function is called from within wait_with_progress, which waits for a particular message (usually the one indicating the end of the transfer), displays progress, and ignores everything else.

    function wait_with_progress {
      while read -r line
      do
        if [ "$line" == "SimpleProgress" ]; then
          handle_progress
        fi
        if [ "$line" == "$1" ]; then
          break
        fi
      done
    }

These are called as follows:

    cat >&3 <<HERE
    ClientGet
    URI=CHK@HH-OJMEBuwYC048-Ljph0fh11oOprLFbtB7QDi~4MWw,B~~NJn~XrJIYEOMPLw69Lc5Bv6BcGWoqJbEXrfX~VCo,AAMC--8/pitcairn_justice.jpg
    Identifier=1234
    Verbosity=1
    ReturnType=disk
    Filename=/tmp/pitcairn_justice.png
    EndMessage
    HERE
    
    wait_with_progress "DataFound" <&3
    wait_for "EndMessage" <&3

The DataFound message is sent by the server when the file has been successfully retrieved. The retrieved data can then be found at the location specified in the Filename field of the ClientGet.

The full bash script is available in getdisk.sh.

$ bash getdisk.sh
Progress: retrieved 0 out of 1 required and 1 total ()
Progress: retrieved 1 out of 1 required and 1 total ()
Progress: retrieved 1 out of 5 required and 10 total ()
Progress: retrieved 1 out of 5 required and 10 total (final)
Progress: retrieved 5 out of 5 required and 10 total (final)

Inserting Data Inline

When storing data using FCP you can provide the data directly in the message or reference a file on disk that the node will read and store. Both are done using the ClientPut message. Sending it with the data inline looks like:

    file="$1"
    size=$(stat -c%s "$file")
    mime=$(file --mime-type "$file" |awk '{print $2}')
    
    cat >&3 <<HERE
    ClientPut
    URI=CHK@
    Metadata.ContentType=$mime
    Identifier=1234
    Verbosity=1
    GetCHKOnly=false
    TargetFilename=$(basename "$file")
    DataLength=$size
    UploadFrom=direct
    Data
    HERE
    
    dd status=none if="$file" bs="$size" count=1 |pv -L 500k >&3
    
    wait_with_progress "PutSuccessful" <&3
    uri=$(get_uri <&3)
    wait_for "EndMessage" <&3

ClientPut requires the mime type of the file, which is obtained using file. The size of the file is retrieved with stat. These are placed in the ClientPut message directly. The binary data for the file needs to be sent after a Data terminator, similar to how the data was received when doing an inline get. dd is again used for this, but it's piped through pv to limit the transfer rate, otherwise the network gets swamped due to buffer bloat.

The URI for inserting is generated as a CHK key. This is a key based on a hash of the file content. Inserting the same file will result in the same key. get_uri reads the PutSuccessful message to find the full key of the insert. This is displayed to the user later in the script.
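The get_uri helper isn't shown above either. A minimal sketch, assuming PutSuccessful reports the final key in a URI field, scans the message fields for URI= in the same style as get_data_length:

```shell
# Hypothetical sketch of get_uri: scan the fields of the PutSuccessful
# message for URI= and print the key the data was inserted under,
# stopping so that the following EndMessage is left for wait_for.
function get_uri {
  local line
  while read -r line
  do
    if [[ "$line" =~ ^URI=.* ]]; then
      echo "${line##URI=}"
      break
    fi
  done
}
```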

The full bash script is available in putinline.sh.

$ bash putinline.sh /tmp/example.txt 
Progress: inserted 0 out of 1 ()
Progress: inserted 0 out of 2 ()
Progress: inserted 0 out of 2 (final)

Inserting Data from Disk

Inserting data directly from a disk file works very similarly to retrieving to a disk file. It requires the TestDDA process followed by a ClientPut with UploadFrom set to disk:

    cat >&3 <<HERE
    ClientPut
    URI=CHK@
    Metadata.ContentType=$mime
    Identifier=1234
    Verbosity=1
    GetCHKOnly=false
    TargetFilename=$(basename "$file")
    Filename=$file
    UploadFrom=disk
    EndMessage
    HERE
    
    wait_with_progress "PutSuccessful" <&3
    uri=$(get_uri <&3)
    wait_for "EndMessage" <&3

The full bash script is available in putdisk.sh.

Other Messages

There are other interesting messages that are useful. With ClientPut, if you set the GetCHKOnly field to true the file isn't inserted but the CHK key is still generated. Since a CHK is based on the file contents, inserting the file with the same mime type, filename and other parameters will produce the same key. This allows generating a key and sending it to a third party so they can start retrieving the file before the insert completes. There are security issues with this: if an attacker knows the key while it is being inserted they may be able to narrow down the location of the inserting node.
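As a sketch of the GetCHKOnly flow, this hypothetical helper builds a ClientPut message that computes the key without inserting. The helper name and field values are illustrative; the content would still be sent after the Data terminator as with a normal direct insert, since the node needs it to compute the key:

```shell
# Hypothetical helper: build a ClientPut that only computes the CHK.
# GetCHKOnly=true means the data isn't inserted, but the node still
# needs the content (sent after Data) to compute the key.
function chk_only_put {
  local file="$1" size mime
  size=$(stat -c%s "$file")
  mime=$(file --mime-type "$file" | awk '{print $2}')
  cat <<HERE
ClientPut
URI=CHK@
Metadata.ContentType=$mime
Identifier=1234
GetCHKOnly=true
TargetFilename=$(basename "$file")
DataLength=$size
UploadFrom=direct
Data
HERE
}
```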

Another useful message is GenerateSSK:

GenerateSSK
Identifier=1234
EndMessage

This results in an SSKKeypair reply containing a randomly generated SSK insert and request key:

SSKKeypair
InsertURI=SSK@AK.../
RequestURI=SSK@Bn.../
Identifier=1234
EndMessage

These can be used to insert data with ClientPut by setting the URI to the InsertURI, and retrieved by a third party using the RequestURI as the URI in ClientGet.
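As an illustration, a ClientPut under the generated insert key could be built like this. This is a sketch: the helper name, the "mysite-0" document name and the field values are all illustrative, and $inserturi would come from the SSKKeypair reply:

```shell
# Hypothetical helper: insert data under a previously generated SSK
# insert key. "mysite-0" is an illustrative document name; the data
# itself would be sent after the Data terminator as with any direct
# insert.
function ssk_put {
  local inserturi="$1" file="$2" size
  size=$(stat -c%s "$file")
  cat <<HERE
ClientPut
URI=$inserturi/mysite-0
Metadata.ContentType=text/plain
Identifier=1234
DataLength=$size
UploadFrom=direct
Data
HERE
}
```

A third party would then use the corresponding RequestURI with the same document name in a ClientGet.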

ClientPutDiskDir inserts an entire directory. This is the basis of inserting 'Freesites' - Freenet websites. I wrote a mirror.sh utility that mirrors an online website or page and inserts it into Freenet. This is useful for linking to news articles in Freenet without having to leave the Freenet environment. It uses a putdir.sh script that inserts a directory.

Conclusion

The Freenet API has a lot of functionality beyond what I've shown here. I used bash for the examples because it has few dependencies, but more robust scripts would be easier to write in a full-featured programming language. I'm not a bash expert and welcome corrections and additions. I've put the code in an fcp-examples github repository.

There are some client libraries, pyFreenet being one example. I recommend the following articles for a deeper dive into Freenet programming:

Tags: freenet 

2017-03-20

Relational Programming in Mozart/Oz

A video I watched recently on logic programming, A Vision for Relational Programming in miniKanren by William Byrd, gives some interesting examples of relational programming using miniKanren.

miniKanren is an embedded constraint logic programming language designed for writing programs as relations, rather than as functions or procedures. Unlike a function, a miniKanren relation makes no distinction between its inputs and outputs, leading to a variety of fascinating behaviors. For example, an interpreter written as a relation can also perform code synthesis, symbolic execution, and many other tasks.

The video demonstrates how a relational interpreter can be used not just to evaluate expressions but also to generate valid expressions given a result. Other examples of relational programming are the Prolog examples in my Shen Prolog post, where relational functions like member can be used to enumerate all members of a list as well as for the traditional test of whether an element is a member of a list.

I like to use the Mozart Programming System for exploring logic programming and this post goes through converting some examples in Mozart/Oz. Mozart is an implementation of the Oz programming language. The book "Concepts, Techniques, and Models of Computer Programming" is a good textbook for learning the different programming models that Mozart enables, including the relational model. For the examples here I'm using Mozart 1.3.x. It's an old language and implementation but I like to experiment with languages that are a bit different to mainstream languages.

The following Prolog examples can be tested in the online SWI Prolog implementation, Swish.

Basic example

The following Prolog predicate succeeds for multiple values of its argument:

foo(one).
foo(two).
foo(three).

Passing either one, two or three succeeds:

foo(one).
true

foo(two).
true

foo(four).
false

Using backtracking it's possible to enumerate all valid arguments:

foo(X).
X = one
X = two
X = three

The function findall can be used to return all values in a list:

findall(X, foo(X), Y).
Y = [one, two, three]

Backtracking in Mozart/Oz is not the default. Mozart provides the choice statement to enable backtracking. A choice statement contains a sequence of clauses separated by [], where a 'choice point' is created for each group of clauses. Each clause is tried in turn - if a particular clause fails then execution backtracks to a previous choice point and resumes until one succeeds or all fail. The equivalent implementation of foo in Mozart is:

fun {Foo}
   choice
      one
   []
      two
   []
      three
   end
end

Prolog uses an automatic depth-first search to find solutions. Mozart doesn't do automatic search - programs involving choice points must be run in a search engine that can implement any form of search required. A default search engine is provided that does depth-first search. Here we create a search object and enumerate all the valid results manually:

Y = {New Search.object script(Foo)}
{Browse {Y next($)}}
[one]

{Browse {Y next($)}}
[two]

{Browse {Y next($)}}
[three]

{Browse {Y next($)}}
nil

The Mozart syntax can look a little strange at first but it's not too difficult to understand once it's learnt. New instantiates an object. Search.object is the class of the object being created. script(Foo) is a record that is being passed to the constructor of Search.object. The Foo is the function we created previously. The remaining statements call the next method on the object. next takes as an argument a variable that receives the result of the call. The use of $ tells Mozart to pass a temporary variable to receive the result, and return that result as the result of the call. This result is passed to Browse which displays the result in the GUI browser. Results are returned in single element lists and when there are no more results, nil is returned.

There is a library function to return all possible solutions in a list, Search.all:

{Browse {Search.all Foo 1 _ $}}
[one two three]

The ability to interactively drive the search enables writing programs that control the search process. The Explorer is a Mozart tool that uses this ability to show an interactive graph of the search space of a program. It can be run to show a graph of all solutions in Foo with:

{Explorer.all Foo}

Membership testing

In Prolog a function to test membership of a list can be written as:

mem(X, [X|_]).
mem(X, [_|Y]) :- mem(X, Y).

This states that X is a member of the list if X is the head of the list (the first element), or if it is a member of the tail of the list.

mem(2, [1,2,3]).
true.

Thanks to Prolog's backtracking you can also use this function to enumerate all values in the list:

mem(X, [1,2,3]).
X = 1
X = 2
X = 3

In Swish you need to click 'Next' to get each result. You can return all results as a list with findall:

findall(X, mem(X, [1,2,3]), Y).
Y = [1, 2, 3].

This is what the non-backtrackable member function looks like in Mozart:

fun {Mem X Ys}
   case Ys
   of nil then false
   [] Y|Yr then X==Y orelse {Mem X Yr}
   end
end

{Browse {Mem 1 [1 2 3]}}
true

A backtrackable version of this code uses choice instead of case to create choice points in each clause:

proc {Mem X Ys}
   choice
      Ys = X|_
   []
      Yr
   in
      Ys = _|Yr
      {Mem X Yr}
   end
end

Here proc is used instead of fun. A procedure doesn't return a value; it is expected to bind values to its arguments to return a result. In this case either X or Ys will be bound depending on what is passed as an argument. Given the call {Mem 1 [1 2 3]}, the first clause of choice succeeds - the head of the list is equal to X. With the call {Mem X [1 2 3]}, X will be successively bound to each element of the list, in an order depending on the search strategy used to evaluate it. {Mem 1 Y} will enumerate all possible lists containing 1:

{Browse {Search.all proc {$ L} {Mem 1 [1 2 3]} end 1 _ $}}
[_]

{Browse {Search.all proc {$ L} {Mem L [1 2 3]} end 1 _ $}}
[1 2 3]

Y={New Search.object script(proc {$ L} {Mem 1 L} end)}
{Browse {Y next($)}}
[1|_]

{Browse {Y next($)}}
[_|1|_]

{Browse {Y next($)}}
[_|_|1|_]

The difference here compared to our search over Foo is that an anonymous procedure is passed to the search engine. Foo was already a single argument procedure so it didn't require a wrapper. The anonymous procedure takes a single argument which is expected to be bound to the result of a single iteration. In the first example no result is bound, just the fact that the Mem call succeeds is enough. In the second, L is bound to the first argument to Mem resulting in a list of all valid first arguments. In the third, L is bound to the second argument to Mem resulting in a list of all valid lists that contain the element 1. This is infinite so we only iterate the first few solutions.

Syntax checker

The miniKanren video referenced earlier, A Vision for Relational Programming in miniKanren, has an example of a simple language syntax checker implemented in the relational style. The equivalent Mozart implementation is:

proc {LCSyn Term}
   choice
      {IsAtom Term true}
   []
      X T
   in
      Term = lambda(X T)
      {IsAtom X true}
      {LCSyn T}
   []
      E1 E2
   in
      Term = [E1 E2]
      {LCSyn E1}
      {LCSyn E2}
   end
end

A Term is either an atom, a lambda record containing an argument and body, or an application of two expressions (here represented as a list). A Term can be tested to see if it is valid with:

{Browse {Search.one.depth
         proc {$ L}
            {LCSyn lambda(foo bar)}
         end
         1 _ $}}

A result of [_] indicates that it succeeded (nil would be a failure). Thanks to the magic of relational programming it's possible to enumerate all valid terms:

Y={New Search.object script(LCSyn)}
{Browse {Y next($)}}
[lambda(_ _)]

{Browse {Y next($)}}
[[_ _]]

{Browse {Y next($)}}
[[lambda(_ _) _]]

{Browse {Y next($)}}
[[[_ _] _]]

Reversible Factorial

As a final example, the following program implements a factorial function that can compute the factorial of a number, or the number given its factorial:

proc {Fact ?N ?F}
   proc {Fact1 ?N ?F}
      choice
         N = 0
         F = 1
      []
         N1 F1
      in
         N1::0#FD.sup
         F1::0#FD.sup
         N >: 0
         N1 =: N - 1
         F =: N * F1
         {Fact1 N1 F1}
      end
   end
in
   N::0#FD.sup
   F::0#FD.sup      
   {Fact1 N F}
   {FD.distribute naive [N F]}
end

% Factorial of 5
{Browse {Search.all proc {$ L} {Fact 5 L} end 1 _ $}}
[120]

% What factorial gives the answer 24
{Browse {Search.all proc {$ L} {Fact L 24} end 1 _ $}}
[4]

This uses a few more Mozart features but is essentially a search through the choice points in a choice statement. In this case it also uses Finite Domain Constraints. These let you tell Mozart the range a particular integer value can take and impose constraints on that value; the search process then attempts to find a solution that narrows the integer to a single value. The syntax X::0#5 constrains the variable X to be from zero to five. FD.sup is a constant for an implementation-defined maximum upper bound. Operators ending in : impose constraints on values that can be backtracked during search. There's a limitation to this approach: the finite domain constraint has an implementation-defined maximum upper value and the search will fail if it hits that value, which unfortunately limits the practicality of a reverse factorial.

Mozart/Oz summary

Mozart/Oz has a lot of other interesting functionality: distributed objects, GUI programming and concurrency, amongst others. It's a fun tool to experiment with and Concepts, Techniques, and Models of Computer Programming is an excellent read.

Unfortunately development of Mozart has slowed from its heyday. A reimplementation, called Mozart 2, is being worked on that uses LLVM for the code generator and replaces the internal constraint programming system with the Gecode toolkit. Development seems to have stalled recently and it lacks many of the features that already exist in older Mozart versions. Hopefully it'll pick up steam again.

For this reason I continue to use Mozart 1.3.x. There is a 1.4.x version but it has some issues that make me avoid using it. The distributed layer was replaced with an implementation written in C++ which has some bugs that I've been unable to work around in projects where I used it. The distribution panel is broken due to the hooks it requires not being implemented by the new C++ layer. At some point Mozart 2 may be complete enough for me to move to that version. I make the occasional fixes to 1.3.x and 1.4.x to keep it building and running on Linux.

Even though Mozart seems to be in a holding pattern it's great for exploring ideas, and the list of papers is a good resource for learning about distributed programming, constraints and concurrency. Some interesting projects implemented in Mozart include:

Tags: mozartoz 

2017-02-22

Borrowing Internal Pointers in ATS

Recently Robert O'Callahan wrote a post on what Rust can do that other languages can't. It involves getting a pointer to an internal field of a stack allocated object and having the type system ensure that the lifetime of that pointer doesn't outlive the owning object.

As soon as I saw this I thought "Aha, ATS can do that!" but I suspected it would be quite a bit more verbose. It turns out it's possible but tedious to do and while ATS 1 can do it in safe code, ATS 2 requires some help. This isn't a "Robert's post is wrong" post or a tear down of Rust, it's a thought experiment in how to do things in ATS that are easier in Rust. Plus ATS is in no way a "remotely mainstream language".

The following is a representation of the scenario that works in ATS 1. First we declare the type of an object that contains two fields:

viewtypedef X = @{
  a= int,
  b= int
}

A function that safely returns a pointer to one of the fields looks like this:

fun get_b_ref {l:addr} (pf: X @ l | x: ptr l):
              [l2:addr] (int @ l2, int @ l2 -<lin> X @ l | ptr l2) =
let
  val p = &(x->b)
  prval pf = view@ (x->b)
in
  #[.. | (pf, llam pf => let
          prval () = view@ (x->b) := pf
        in
          view@ (!x)
        end
    | p)]
end

This can look imposing if you're not familiar with ATS. What does it all mean? Once you're familiar with ATS you can read it as: get_b_ref takes a pointer to an X and a proof that you can use it. It consumes this proof, meaning we can't access the X object after the call. It returns a pointer to an int - one of the internal fields of X - along with a proof to use it as an int. It also returns a proof function that consumes that proof of an int and gives back the original proof that there is an X at l - allowing you to use the original x as an X again and preventing further use of the returned int pointer, since the proof that permitted its use has been consumed.

In more detail, it defines a function, get_b_ref, that takes a pointer to an X object as an argument: (pf: X @ l | x: ptr l). The items to the left of the | are proof arguments and only exist at type checking time. This says that pf contains a proof that an object of type X exists at address l. The l is defined in the {l:addr} section of the function declaration meaning l is a memory address. The items to the right of the | are normal values. In this case, x is a pointer with address l. Because we have a pointer at address l and a proof that an object of type X exists at l we can treat the pointer safely as being a pointer to an X. The function consumes the proof when called. That means that the proof cannot be used again so we can no longer treat x as a pointer to an X. It's a pointer to something we can't do anything with after calling.

The result of the function comes after the : symbol. The (int @ l2, int @ l2 -<lin> X @ l | ptr l2) is the meat of it. Again, proofs are to the left of the | and normal values to the right. In this case the right is ptr l2 which means a pointer of some address, l2, is returned. The intent of the function is to return an internal pointer to one of the int fields so we'd expect to have it return a proof that l2 points to an int. The proof returned is a tuple: (int @ l2, int @ l2 -<lin> X @ l).

The first field of that tuple is int @ l2 which is the proof we expected. So a caller of this function will have a pointer to an int and the proof to use it as such. The second field of the tuple is int @ l2 -<lin> X @ l. This is the syntax for declaring an anonymous function type. The items to the left of the -<...> are the arguments to the function and the items to the right are the return value. The second field of the tuple is therefore a function that takes a proof that an int exists at address l2 and returns a proof that an X exists at l. The lin means this function is linear - it can only be used once. Because this tuple appears in the proof section of the result, this anonymous function is a proof function - it is called at type checking time.

The implementation of the function gets a pointer to the b field of the object and a proof that we can use it:

val p = &(x->b)
prval pf = view@ (x->b)

The prval line is proof code that runs at type checking time. view@ is an ATS keyword that gets the proof for an object if we can access it. Once we take a proof using view@ we need to return it later or the proof checking fails. This is a form of 'borrowing' - we borrow the proof to access the internal object and have to give it back later, otherwise we can't access that field again and the compiler complains if we don't give it back.

The main part of the return value is:

(pf, llam pf => let
              prval () = view@ (x->b) := pf
            in
              view@ (!x)
            end
        | p)

It is a tuple, with two proof arguments and a value. The first proof argument is the proof of int @ someaddress to access the internal field we got earlier using view@. The second is a linear lambda function. That function is what returns the proof back to the view@ for the internal field in this line:

prval () = view@ (x->b) := pf

This is the returning of the proof we borrowed earlier. The linear lambda function returns the view@ of the original X object. The value being returned, p, is the pointer to the internal field.

The #[.. | ...] syntax is a way of telling the compiler that the return value is of a particular existential type and that it should compute it from the context of the expression such that it matches the [l2:addr] existential result type declared in the function signature. Ideally the compiler would infer this without our help but unfortunately it does not. It will tell us if we get it wrong though.

Usage of the function looks like:

implement main() = let
  var x: X = @{ a=1, b=2 }
  val (pf, pff | p) = get_b_ref(view@ x | &x)
  val _ = !p := 3
  prval () = view@ x := pff(pf)
in
  println!("x.b = ", x.b)
end

A variable x is declared on the stack. We get a pointer to the b field using the function above and set that field via the pointer. We're allowed to do this because we have a proof that p is a pointer to an int via the pf proof variable. Calling get_b_ref makes x no longer usable as get_b_ref consumed the proof to access it. The proof function, pff is used to return the proof to access X so we can then use x again. After that we can't use the internal pointer to b anymore. The compiler prevents the escape of the proof to access the internal pointer from the lifetime of the stack variable x.

Unfortunately this code doesn't work in ATS 2. The equivalent code, which fails to compile, is:

fun get_b_ref {l:addr} .<>. (pf: X @ l | x: ptr l):<>
                            [l2:addr] (int @ l2, int @ l2 -<lin> X @ l | ptr l2) =
let
  val p = addr@(x->b)
  prval pf = view@ (x->b)
in
  (pf, llam (pf) => let
          prval () = view@ (x->b) := pf
        in
          view@ (!x)
        end
    | p)
end

ATS 2 does not allow the use of view@ (!x) inside the lambda if the x comes from outside the lambda as it does in this case. Maybe this will change in future iterations of ATS 2 but currently it does not. One workaround is to resort to embedded C code:

#include "share/atspre_define.hats"
#include "share/atspre_staload.hats"

viewtypedef X = @{
  a= int,
  b= int
}
extern viewtypedef "X" = X

%{
int* get_b_ref(X* x) {
  return &x->atslab__b;
}
%}

extern fun get_b_ref {l:addr} (pf: X @ l | x: ptr l):<>
         [l2:addr] (int @ l2, int @ l2 -<lin> X @ l | ptr l2) = "mac#get_b_ref"

implement main0() = let
  var x: X = @{ a=1, b=2 }
  val (pf, pff | p) = get_b_ref(view@ x | addr@ x)
  val _ = !p := 3
  prval () = view@ x := pff(pf)
in
  println!("x.b = ", x.b)
end

Here the X object is given a name "X" that can be accessed from C using the line:

extern viewtypedef "X" = X

The get_b_ref function is implemented in C and given an extern definition that matches the previous one. Although the C implementation is effectively 'unsafe' code, any call of get_b_ref from ATS is safe, as the type defines exactly what it does. This is an example of implementing a typesafe kernel around potentially unsafe code. In this case there is no runtime overhead as the typesafe kernel is proof code.

Which approach to use depends on the project. Often the overhead of implementing safe pointer code in ATS - satisfying the proofs of the implementation - is too high at the time. Embedded C with a safe external definition is a faster way to get things done. Later the C code could be replaced with proof-correct ATS as desired.

Hopefully this gives some insight into what ATS can do, and gives some appreciation of the simplicity of the approach Rust has taken. ATS tends towards providing the low level tools to build the level of access you want, but this has cognitive and syntactic overhead. Once you're familiar with the system it becomes understandable but it's probably not "new user friendly" like Rust is.

Tags: ats 


This site is accessible over tor as hidden service 6vp5u25g4izec5c37wv52skvecikld6kysvsivnl6sdg6q7wy25lixad.onion, or Freenet using key:
USK@1ORdIvjL2H1bZblJcP8hu2LjjKtVB-rVzp8mLty~5N4,8hL85otZBbq0geDsSKkBK4sKESL2SrNVecFZz9NxGVQ,AQACAAE/bluishcoder/-61/

