Bluish Coder

Programming Languages, Martial Arts and Computers. The Weblog of Chris Double.


2017-01-19

Transitioning from ATS1 to ATS2

ATS is a programming language I've used a lot in the past. A new version, ATS2, was released a while ago that brought various library and language changes, and I never got around to transitioning my code to it. Many of my existing posts on ATS are still relevant, but some syntax has changed and many of my examples won't compile in the new system. I'm playing around with ATS2 now and in this post will note some of the major differences I've encountered while getting some of my existing ATS1 examples to build.

The first thing to note is that there is now extensive documentation for ATS2 compared to when I first started learning. This was very helpful in working out the differences.

Starting with a simple 'hello world' program:

implement main0() = print("Hello World\n")

Put in a 'hello.dats' file, this can be compiled with:

$ patscc -o hello hello.dats
$ ./hello
Hello World

Notice that the main entry function is called main0. In ATS1 this was main. In ATS2 the use of main is to mirror the C main function in that it can take arguments and return a value:

#include "share/atspre_define.hats"
#include "share/atspre_staload.hats"

implement main(argc, argv) =
  if argc = 2 then
    (print("Hello "); print(argv[1]); print_newline(); 0)
  else
    (print("This program requires an argument.\n"); 1)

main takes the traditional argc and argv arguments that the C language main function takes. The function must return an integer value - in this example 0 for success and 1 for failure. Note the two #include statements. These are boilerplate used by ATS programs to give the ATS compiler, atsopt, access to certain external library packages and the definitions of various library functions.

$ patscc -o hello1 hello1.dats
$ ./hello1
This program requires an argument.
$ ./hello1 World
Hello World

In my Linear Datatypes in ATS article I showed an example of declaring a datatype and folding over it:

datatype list (a:t@ype) =
  | nil (a)
  | cons (a) of (a, list a)

fun {seed:t@ype}{a:t@ype} fold(f: (seed,a) -<> seed, s: seed, xs: list a): seed =
  case+ xs of
  | nil () => s
  | cons (x, xs) => fold(f, f(s, x), xs)

implement main() = let
  val a = cons(1, cons(2, cons(3, cons(4, nil))))
  val b = fold(add_int_int, 0, a)
  val c = fold(mul_int_int, 1, a)
  val () = printf("b: %d c: %d\n", @(b, c))
in
  ()
end

This won't compile in ATS2 due to:

  • add_int_int and mul_int_int don't exist in ATS2. They are replaced with g0int_add_int and g0int_mul_int.
  • printf doesn't exist in ATS2. I don't know of a replacement for this other than multiple print statements.
  • main needs to be main0.

The working ATS2 version becomes:

#include "share/atspre_define.hats"
#include "share/atspre_staload.hats"

datatype list (a:t@ype) =
  | nil (a)
  | cons (a) of (a, list a)

fun {seed:t@ype}{a:t@ype} fold(f: (seed,a) -<> seed, s: seed, xs: list a): seed =
  case+ xs of
  | nil () => s
  | cons (x, xs) => fold(f, f(s, x), xs)

implement main0() = let
  val a = cons(1, cons(2, cons(3, cons(4, nil))))
  val b = fold(g0int_add_int, 0, a)
  val c = fold(g0int_mul_int, 1, a)
  val _ = print("b: ")
  val _ = print_int (b)
  val _ = print (" c: ")
  val _ = print_int (c)
  val _ = print_newline()
in
  ()
end

Compiling this fails at the link stage however:

$ patscc -o test test.dats
/tmp/ccfgUdP6.o: In function `mainats_void_0':
t2_dats.c:(.text+0x138): undefined reference to `atsruntime_malloc_undef'
t2_dats.c:(.text+0x15c): undefined reference to `atsruntime_malloc_undef'
t2_dats.c:(.text+0x180): undefined reference to `atsruntime_malloc_undef'
t2_dats.c:(.text+0x1a4): undefined reference to `atsruntime_malloc_undef'
collect2: error: ld returned 1 exit status

The reason for this is that the example uses dynamic memory allocation. ATS2 requires a command line flag to tell it to use the libc functions for allocating and freeing memory:

$ patscc -DATS_MEMALLOC_LIBC -o test test.dats
$ ./test
b: 10 c: 24

The same article shows a linear version of the datatype using dataviewtype:

dataviewtype list (a:t@ype) =
  | nil (a)
  | cons (a) of (a, list a)

fun {a:t@ype} free(xs: list a): void =
  case+ xs of
  | ~nil () => ()
  | ~cons (x, xs) => free(xs)

fun {seed:t@ype}{a:t@ype} fold(f: (seed,a) -<> seed, s: seed, xs: !list a): seed =
  case+ xs of
  | nil () => (fold@ xs; s)
  | cons (x, !xs1) => let val s = fold(f, f(s, x), !xs1) in fold@ xs; s end

implement main() = let
  val a = cons(1, cons(2, cons(3, cons(4, nil))))
  val b = fold(add_int_int, 0, a)
  val c = fold(mul_int_int, 1, a)
  val () = printf("b: %d c: %d\n", @(b, c))
in
  free(a)
end

Here the list type is linear. References to a list value can't be aliased and they must be explicitly consumed. In ATS2 this code now looks like:

#include "share/atspre_define.hats"
#include "share/atspre_staload.hats"

datavtype list (a:t@ype) =
  | nil (a)
  | cons (a) of (a, list a)

fun {a:t@ype} free(xs: list a): void =
  case+ xs of
  | ~nil () => ()
  | ~cons (x, xs) => free(xs)

fun {seed:t@ype}{a:t@ype} fold(f: (seed,a) -<> seed, s: seed, xs: !list a): seed =
  case+ xs of
  | nil () => (s)
  | cons (x, xs1) => let val s = fold(f, f(s, x), xs1) in s end


implement main0() = let
  val a = cons(1, cons(2, cons(3, cons(4, nil))))
  val b = fold(g0int_add_int, 0, a)
  val c = fold(g0int_mul_int, 1, a)
  val _ = print("b: ")
  val _ = print_int (b)
  val _ = print (" c: ")
  val _ = print_int (c)
  val _ = print_newline()
in
  free a
end

This is mostly the same but the implementation of fold is simplified. There is no longer a need to prefix the xs1 variable with "!" in the pattern match, and the fold@ is not needed. In the cases where it is needed, an @ prefix on the constructor seems to be used in the pattern match in ATS2:

fun {seed:t@ype}{a:t@ype} fold(f: (seed,a) -<> seed, s: seed, xs: !list a): seed =
  case+ xs of
  | nil () => (s)
  | @cons (x, xs1) => let val s = fold(f, f(s, x), xs1) in fold@ xs; s end

There is some discussion about this in the ATS tutorial but I need to look into it in detail to work out where it is needed now.

Another difference in this example is using datavtype instead of dataviewtype. The ATS2 examples I've seen consistently use the shorter word but the longer one still seems to work.

ATS2 makes more extensive use of templates. One thing that I found a pain point in ATS1 was dealing with higher order functions. ATS has multiple function types. There are functions that have no external environment (e.g. plain C functions), closures which contain a pointer to an environment, linear closures where the closure environment must be explicitly freed, etc. (see closures in ATS for some examples). Writing higher order functions like map and filter would often require implementing a version for each type of function, with duplicated boilerplate code.

The ATS2 library uses function templates specialized within the implementation of higher order functions to deal with this boilerplate. Here is an example of the technique adapted from Programming in ATS:

#include "share/atspre_define.hats"
#include "share/atspre_staload.hats"

datatype list (a:t@ype) =
  | nil (a)
  | cons (a) of (a, list a)

extern fun{a:t@ype}{b:t@ype} map$impl(x: a): b

fun{a:t@ype}{b:t@ype} map(xs: list a): list b = let
  fun loop(xs: list a): list b =
    case+ xs of
    | nil () => nil ()
    | cons(x, xs) => cons(map$impl<a><b>(x), loop(xs))
in
  loop(xs)
end

This implements a 'map' template function that takes a list of type a and returns a list of type b. It iterates over the list calling another template function called map$impl on each element and conses up the result. The map$impl function takes an a and returns a b. It is left unimplemented so far. The following is an implementation of a map that applies a function over the list:

fun{a:t@ype}{b:t@ype} map_fun(xs: list a, f: a -<fun> b) = let
  implement map$impl<a><b> (x) = f(x)
in
  map<a><b>(xs)
end

Here the map$impl function is implemented locally within map_fun to call the passed-in function. The version that takes a closure does the same:

fun{a:t@ype}{b:t@ype} map_clo(xs: list a, f: a -<cloref> b) = let
  implement map$impl<a><b> (x) = f(x)
in
  map<a><b>(xs)
end

In this way the code that does the looping and mapping is implemented once in the map function and shared amongst the implementations that take the different function types. The caller doesn't need to know about templates to use it:

implement main0() = let
  val n = 5
  val a: list int = cons(1, cons(2, cons(3, cons(4, nil))))
  val b = map_fun(a, lam(x) => x * 2)
  val c = map_clo(a, lam(x) => x * n)

in
  ()
end

A lot of the ATS library implementation seems to take this approach, with template specialisation being used to extend it.
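
For instance, a program that doesn't need the higher order wrappers can specialise map$impl directly for concrete types and call map itself. The following is a rough sketch of what that might look like for the map template above - I haven't compiled it alongside the full example, so treat it as illustrative:

(* specialise map$impl for mapping int to int *)
implement map$impl<int><int> (x) = x * x

(* callers can then use the specialised instance directly *)
fun square_list(xs: list int): list int = map<int><int>(xs)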

There are other additions to ATS2 that I haven't looked at yet. The constraint solver can be replaced with an external solver like Z3. There are backends for other languages, including JavaScript. There's a Try ATS online service for trying out the language and support for a C++ FFI.

I probably won't rewrite my existing ATS1 posts to be ATS2 compatible but will write about any changes needed to get things to compile if they're non-obvious.

Tags: ats 

2017-01-18

Running X11 apps in an rkt container

rkt is a container runtime I've been using on a few projects recently. I was creating a container for Mozart which uses emacs as an IDE. This requires running an X11 application within the container and having it displayed on the host display.

To get this working I needed to mount my host's X11 unix domain socket inside the container and provide an Xauthority file that gave the container the rights to connect to the host X server.

The following shell commands use acbuild to create a container that runs xclock as an example of the process:

acbuild begin docker://ubuntu:16.04
acbuild set-name bluishcoder.co.nz/xclock
acbuild run -- apt install --no-install-recommends --yes x11-apps
acbuild run -- rm -rf /var/lib/apt/lists/*
acbuild environment add DISPLAY unix$DISPLAY
acbuild environment add XAUTHORITY /root/.Xauthority
acbuild mount add x11socket /tmp/.X11-unix
acbuild mount add x11auth /root/.Xauthority
acbuild set-exec xclock
acbuild write --overwrite xclock.aci
acbuild end

It uses an Ubuntu Linux image from the Docker hub as a base and installs x11-apps. To reduce disk space it removes cached package files afterwards. A DISPLAY environment variable is set to use the same DISPLAY as the host. The XAUTHORITY environment variable is set to point to a file in the home directory of the root user in the container.

The mount subcommands declare the x11socket and x11auth mount points at the locations where the X11 unix domain socket and the Xauthority file are expected to be. The rkt invocation will map host resources into those locations.

The final part of the script sets the executable to be xclock and writes the aci file.

On the host side we need to create an Xauthority file that provides the container access to our X11 server. This file needs to be set up so that any hostname can connect to the X11 server, as the hostname of the container can change between invocations. To do this the authentication family in the file needs to be set to FamilyWild. I got the steps to do this from this stack overflow post:

xauth nlist :0 | sed -e 's/^..../ffff/' | xauth -f myauthority nmerge -

This will retrieve the Xauthority information for display :0 and modify the first four bytes to be ffff. This sets the authority family to FamilyWild. A new file called myauthority is created with this data. This file will be mapped to the x11auth mount point in the container.
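
To double-check, xauth can list the entries of the new file in numeric form so you can confirm the first field now reads ffff (a quick verification step of my own, assuming the standard xauth tool):

$ xauth -f myauthority nlist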

The container can be executed with rkt:

rkt run --insecure-options=image xclock.aci \
        --volume x11socket,kind=host,source=/tmp/.X11-unix \
        --volume x11auth,kind=host,source=./myauthority

The --volume command line arguments map the mount points we defined in the acbuild commands to locations on the host. The running xclock application should now appear on the host X11 display.

Tags: rkt 

2016-08-30

Kicking the tires of Shen Prolog

Shen is a lisp based programming language with some interesting features, among them an optional static type system, pattern matching, portability across many host platforms, and an embedded Prolog.

It's the last of these I want to look at in this post. I've been doing some programming in Prolog recently and wanted to explore how to do similar things in Shen. This post compares some Prolog examples to the equivalent Shen Prolog to see how they differ.

Installing

There are quite a few options for installing Shen. For this post I used shen-scheme running on chibi-scheme.

First I installed Chibi:

$ git clone https://github.com/ashinn/chibi-scheme/
$ cd chibi-scheme
$ sudo make install

Then shen-scheme:

$ git clone https://github.com/tizoc/shen-scheme
$ cd shen-scheme
$ make
$ chibi-scheme -Rshen.runner
Shen, copyright (C) 2010-2015 Mark Tarver
www.shenlanguage.org, Shen 19.2
running under Scheme, implementation: chibi-scheme
port 0.14 ported by Bruno Deferrari

(0-) 

There is a JavaScript port of Shen available to try online that may work for these examples, depending on browser support. I was able to use it in Firefox.

For prolog I use SWI Prolog. This comes with many great libraries and examples for using prolog in 'real world' use cases. The examples can be tried online using Swish.

Member of a list

A popular example of using prolog is the member function (here I use mem to not clash with a builtin member function):

mem(X, [X|_]).
mem(X, [_|Y]) :- mem(X, Y).

In prolog syntax a variable begins with an uppercase letter and a list is destructured into its head and tail using the syntax [Head|Tail]. The first rule states that X is a member of a list if it appears at the head of the list. The second rule states that X is a member of a list if it is a member of the tail of the list.

Some examples of membership testing (?- is the prolog prompt):

?- mem(1,[1,2,3]).
true

?- mem(3,[1,2,3]).
true

Thanks to the magic of prolog it's possible to use the same function to enumerate all items of the list:

?- mem(X,[1,2,3]).
X = 1 ;
X = 2 ;
X = 3.

In this case the prolog toplevel first gives the result X = 1 with a prompt to continue. Using ; prints each result in turn. The query can be read as "What value of X, given the following list, will return true for a call to mem".

The prolog function findall can be used to return a list of all results without needing interaction from the toplevel:

?- findall(X, mem(X, [1,2,3]), Y).
Y = [1, 2, 3].

findall binds the third argument to a list containing all the values of the first argument that, when substituted into the query in the second argument, result in true. In this case X values of 1, 2 or 3 result in mem(X, [1,2,3]) being true.

Shen prolog uses s-expression syntax and requires a defprolog construct to introduce rules. The equivalent to the mem function above would be:

(defprolog mem
  X [X|_] <--;
  X [_|Y] <-- (mem X Y);
)

Shen uses '<--' in place of the prolog ':-' syntax and ';' in place of '.', and calls are made in s-expression format. Calling prolog definitions requires the use of prolog?. The membership testing examples become (the (1-), (2-), etc. are Shen prompts):

(1-) (prolog? (mem 1 [1 2 3]))
true

(2-) (prolog? (mem 3 [1 2 3]))
true

Shen has the equivalent of prolog's findall:

(3-) (prolog? (findall X [mem X [1 2 3]] Y)
              (return Y))
[3 2 1]

Note that the second argument to findall needs to be an atom - in this case a list (hence the square brackets), not an s-expression function call. To get the actual result of the Y variable the return function is used. return terminates a prolog query and returns the value of its argument.

Appending lists

Another common prolog example is appending two lists. Given the call append(X, Y, Z), Z will contain the result of appending Y to X:

app([],Y,Y).
app([H|T],Y,[H|T2]) :- app(T, Y, T2).

This states that appending a list Y to the empty list results in Y. Appending lists X and Y puts the head of X at the head of the result and recursively calls append with the tail of the first list, the original second list and the currently unbound tail of the result list.

All three arguments can be unbound when calling app. Here are some examples:

# Is [1] appended to [2] the list [1,2]
?- app([1],[2],[1,2]).
true.

# What is the result of appending [1,2,3] and [4,5,6]
?- app([1,2,3],[4,5,6],X).
X = [1, 2, 3, 4, 5, 6].

# What list, when [7,8,9] is appended to it, produces [4,5,6,7,8,9]
?- app(X,[7,8,9],[4,5,6,7,8,9]).
X = [4, 5, 6] ;

# What lists can be appended to each other to produce [1,2,3]
?- findall([X,Y], app(X, Y, [1,2,3]), Z).
Z = [
      [[],        [1, 2, 3]],
      [[1],       [2, 3]],
      [[1, 2],    [3]],
      [[1, 2, 3], []]
    ].

The direct translation to Shen prolog is:

(defprolog app
  [] Y Y <--;
  [H|T] Y [H|T2] <-- (app T Y T2);
)

/* Is [1] appended to [2] the list [1 2] */
(4-) (prolog? (app [1] [2] [1 2]))
true

/* What is the result of appending [1,2,3] and [4,5,6] */
(5-) (prolog? (app [1 2 3] [4 5 6] X)
              (return X))
[1 2 3 4 5 6]

/* What list, when [7,8,9] is appended to it, produces [4,5,6,7,8,9] */
(6-) (prolog? (app X [7 8 9] [4 5 6 7 8 9])
              (return X))
[4 5 6]

/* What lists can be appended to each other to produce [1,2,3] */
(7-) (prolog? (findall [X Y] [app X Y [1 2 3]] Z)
              (return Z))
[[[1 2 3] []]
 [[1 2]   [3]]
 [[1]     [2 3]]
 [[]      [1 2 3]]
]

Factorial

One way of defining factorial in Prolog is:

fac(0,1).
fac(N,R) :- N > 0,
            N1 is N - 1,
            fac(N1, R1),
            R is N * R1.

The is predicate in prolog is used for binding the result of arithmetic expressions to a variable. Use of fac looks like:

# Is 120 the factorial of 5
?- fac(5,120).
true ;

# What is the factorial of 10?
?- fac(10, X).
X = 3628800 ;

# What factorial gives the result of 5040?
?- fac(X, 5040).
ERROR: >/2: Arguments are not sufficiently instantiated

Note that in this case backtracking to compute the answer to the last question was not possible. This is because of the way prolog handles boolean and arithmetic expressions. They are non-logical operations and are calculated by an arithmetic evaluator that uses efficient system arithmetic. Many prolog systems provide a way around this using constraint logic programming, which I'll demonstrate later.

Translating to Shen prolog:

(defprolog fac
  0 1 <--;
  N R <-- (when (> N 0))
          (is N1 (- N 1))
          (fac N1 R1)
          (is R (* N R1));
)

The boolean comparison is introduced using the Shen prolog when function. The is predicate becomes the is function. Other than this the translation is straightforward, as is its usage:

/* Is 120 the factorial of 5 */
(8-) (prolog? (fac 5 X) (identical X 120))
true

/* What is the factorial of 10? */
(9-) (prolog? (fac 10 X) (return X))
3628800

/* What factorial gives the result of 5040? */
(10-) (prolog? (findall X [fac X 5040] Y) (return Y))
invalid type, expected Number: (#(shen.pvar 2))

Note the use of identical in the first example. My first attempt at trying the direct prolog translation gave the following:

(8-) (prolog? (fac 5 120))
vector-ref: not a vector: (120)

I raised a question on the mailing list about it and the answer was that is in Shen Prolog is specifically designed to bind a variable, and here it found a number rather than a variable. The use of identical was suggested as the solution.

In the third example, backtracking over arithmetic expressions doesn't work in Shen prolog, just as it didn't in the Prolog example.

As mentioned previously, many prolog systems include constraint logic programming libraries that can be used to provide backtracking with arithmetic. This is the equivalent factorial program using one such library in SWI Prolog:

:- use_module(library(clpfd)).
fac(0,1).
fac(N,R) :- N #> 0,
            N1 #= N - 1,
            R #= N * F1,
            fac(N1, F1).

?- fac(X,5040).
X = 7 ;

Databases

A common use of prolog is querying databases of facts. An example from here presents some bible facts and queries:

parent(abraham,ismael).
parent(abraham,isaac).
parent(isaac,esau).
parent(isaac,iacob).

# Did Abraham have children
?- parent(abraham, _).
true ;

# All the father/son pairs
?- findall([F, S], parent(F, S), Y).
Y = [[abraham, ismael], [abraham, isaac], [isaac, esau], [isaac, iacob]].

# Find the grandchildren of abraham
?- findall(G, (parent(abraham, X), parent(X, G)), Y).
Y = [esau, iacob].

Translating to Shen prolog:

(defprolog parent
  abraham ismael <--;   
  abraham isaac <--;
  isaac esau <--;
  isaac iacob <--;
)

/* Did Abraham have children */
(11-) (prolog? (parent abraham _))
true

/* All the father/son pairs */
(12-) (prolog? (findall [F S] [parent F S] Y) (return Y))
[[isaac iacob] [isaac esau] [abraham isaac] [abraham ismael]]

/* Find the grandchildren of abraham */
(defprolog grandchild
  F G <-- (parent F X) (parent X G);
)

(13-) (prolog? (findall G [grandchild abraham G] Y) (return Y))
[iacob esau]

In the last example I had to add a new grandchild relation as I could not see a way of including multiple goals in the findall query as the prolog example does.

Calling Shen from Prolog and vice versa

Calling from prolog to Shen can be done with the is function as shown previously with arithmetic:

(defprolog hi
  N <-- (is _ (output "Hello ~S~%" N));
)

(14-) (prolog? (hi "World"))
Hello "World"

It's also possible to use return to return the result of a Shen expression to the top level:

(defprolog double
  X <-- (return (* X 2));
)

(15-) (prolog? (double 5))
10

To pass values from Shen to prolog in variables, use receive to bind a prolog variable to the equivalent Shen variable. Given the Shen prolog version of factorial from earlier, it can be called from a Shen function as follows:

(define shen-fac
  N -> (prolog? (receive N)
                (fac N R)
                (return R)))
shen-fac

(16-) (shen-fac 5)
120

Shen functions are introduced using define and arguments are pattern matched, similar to ML-like languages. The patterns to match the arguments against appear first, followed by -> and then the body to run if the pattern matched. In this case the shen-fac function is defined to take one argument, N.
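
To make the pattern matching clearer, here is a small standalone sketch of my own with more than one clause (named len-of to avoid clashing with the built-in length function):

(define len-of
  [] -> 0
  [_ | Tail] -> (+ 1 (len-of Tail)))

Calling (len-of [a b c]) should return 3: the empty list matches the first clause and anything else is destructured by the [_ | Tail] pattern in the second.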

Palindromes

This one is a bit of a prolog mind bender and makes use of difference lists. The goal is to have a predicate, palindrome, that is true if the list given to it is a palindrome. That is, the list is the same when read forwards or backwards. The description of the solution is based on this stackoverflow answer.

In a difference list, a list is represented as the difference between a normal list and some unknown sublist of its tail. So if (X, Y) represents a difference list for a list X with some tail Y, then an empty list is represented as (Xs, Xs): if a list's tail is the whole list itself, the difference between them is the empty list. A single element difference list is ([X|Xs], Xs), a list containing X as the head and any tail, where the difference is that tail. Notice the tails here are unbound - they have no concrete representation and any tail would work.

Difference lists are useful for things like efficient concatenation, since concatenating only requires binding the tail of the first list. Given the difference lists ([1 2 3|X], X) and ([4 5 6|Y], Y), the concatenation of the two only requires binding X to [4 5 6|Y], giving the difference list ([1 2 3 4 5 6|Y], Y). No copying of lists needs to occur and the concatenation is done in constant time.
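
As a small sketch of that idea in Prolog (using the conventional List-Tail pair notation with -, rather than the two separate arguments used in the palindrome example below):

% concatenate two difference lists by unifying the first tail with the
% second list - no traversal or copying of either list is needed
app_dl(Xs-Ys, Ys-Zs, Xs-Zs).

% ?- app_dl([1,2,3|X]-X, [4,5,6|Y]-Y, Z).
% Z = [1,2,3,4,5,6|Y]-Y.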

In the following example the idea is used to determine if a list is a palindrome. Here is the prolog code:

palindrome(L, L).
palindrome([_|L], L).
palindrome([E|Xs], Tail)  :- palindrome(Xs, [E|Tail]).

palindrome takes two arguments that together represent the difference list. The first clause states that the difference list (L, L) is always a palindrome. A difference list whose tail is the list itself is an empty list, as described previously, and an empty list is always a palindrome.

The second clause states that a single element list is always a palindrome. As noted before, a difference list ([X|Xs], Xs) is a single element list containing X.

The third clause describes that, for a list with more than one element to be a palindrome, it must be of the form E ... E, where E is a single element, ... is the elements between them, and those elements are themselves a palindrome. Using difference lists the list will have some tail, Tail. This means that E ... E | Tail is a palindrome if ... is a palindrome, with Tail being the ignored part of the difference list to be subtracted.

So the head of the third clause obtains the first element of the list with [E|Xs] and some Tail for the difference. It states that this is a palindrome if the difference list (Xs, [E|Tail]) is also a palindrome. And this matches our explanation in the previous paragraph - E ... E | Tail is a palindrome if ... is a palindrome, where ... is Xs in our clause.

Using difference lists to solve palindromes allows efficient pattern matching against the last item of the list - the Tail part of the difference list is effectively used as a pointer to the end of the list so the match against E ... E can be done.

Prolog usage example:

?- palindrome([1,2,2,1],[]).
true ;

?- palindrome([1],[]).
true ;

?- palindrome([1,2,3],[]).
false.

Prolog can even generate solutions interactively:

?- palindrome(X, []).
X = [] ;
X = [_4402] ;
X = [_4402, _4402] ;
X = [_4402, _4414, _4402] ;
X = [_4402, _4414, _4414, _4402] ;
X = [_4402, _4414, _4426, _4414, _4402] ;
X = [_4402, _4414, _4426, _4426, _4414, _4402] .

Note how the lists contain anonymous elements but are still palindromes.

The translation to Shen is:

(defprolog palindrome
  L L <--;
  [_|L] L <--;
  [E|Xs] Tail <-- (palindrome Xs [E|Tail]);
)

(17-) (prolog? (palindrome [1 2 2 1] []))
true

(18-) (prolog? (palindrome [1] []))
true

(19-) (prolog? (palindrome [1 2 3] []))
false

I've not found a way to interactively generate solutions as in the prolog example.

Wrapping the Shen prolog definition of palindrome into a Shen function for easy use from programs can be done with:

(define palindrome?
  L -> (prolog? (receive L) (palindrome L [])))

(20-) (palindrome? [a b b a])
true

Conclusion

There are other features of Shen prolog, including an equivalent of cut, other builtin functions and the ability to suspend unification and use pattern matching in places for efficiency. These are described in The Book of Shen.

The typechecker for Shen is implemented using the prolog facilities in Shen.

For more information on Shen, see the Shen language website at shenlanguage.org and The Book of Shen.

Tags: shen  prolog 

2016-08-18

Using Freenet over Tor

This post outlines a method of using Freenet over Tor based on posts I wrote on my Freenet hosted blog and subsequent discussions about it. If you read my Freenet hosted blog there's little new here; I'm just making it available on my non-Freenet blog.

One issue I've had with Freenet is that it exposes your IP address to peers. Recent law enforcement efforts to monitor Freenet have shown that they have been able to obtain search warrants based on logging requests for blocks of known data and associating them with IP addresses. If law enforcement can do this, so can random bad people.

You can avoid exposing your IP address to random strangers on opennet by using darknet, but even then you have to trust that your friends aren't monitoring your requests. If it were possible to run Freenet over Tor hidden services then only the hidden service address would be exposed by this logging method. A problem is that Freenet uses UDP, which Tor does not support.

A recent post on the Freenet development mailing list pointed out that onioncat provides a virtual network over Tor and tunnels UDP. Using the steps they provided, and some tweaks, it's possible to set up a darknet node that doesn't expose its IP address. It uses the onioncat generated IPv6 address for communicating with peers - and this address is backed by a Tor hidden service.

The steps below outline how to set this up. Note that this is quite experimental and requires care not to expose your IP address. There are some Freenet issues that make things difficult, so be aware that you do this at your own risk and understand that it may still expose your identity if things go wrong.

I'm assuming a Debian/Ubuntu like system for the steps.

Install Tor

Install Tor:

$ sudo apt-get install tor

Edit the /etc/tor/torrc file to enable a Hidden Service with an entry like:

HiddenServiceDir /var/lib/tor/freenet/
HiddenServicePort 8060 127.0.0.1:8060

Restart Tor and find your hidden service hostname:

$ sudo systemctl restart tor
$ sudo cat /var/lib/tor/freenet/hostname

Install onioncat

Install onioncat:

$ sudo apt-get install onioncat

Edit /etc/default/onioncat and change the lines matching the following:

ENABLED=yes
DAEMON_OPTS="-d 0 hiddenservicename.onion -U"

Restart onioncat:

$ sudo systemctl stop onioncat
$ sudo systemctl start onioncat

Find your onioncat IP address with:

$ ocat -i hiddenservicename.onion

Install Freenet

Install Freenet in the usual way and go through the browser based setup wizard. Choose "Details settings: (custom)" for the security option. On the subsequent pages of the wizard:

  • Disable the UPnP plugin.
  • Choose "Only connect to your friends"
  • Choose "High" for "Protection against a stranger attacking you over the internet"
  • Click the "I trust at least one person already using Freenet" checkbox.
  • For "Protection of your downloads..." pick any option you want.
  • Pick a node name that your darknet friends will see.
  • Pick a datastore size that you want.
  • Choose the bandwidth limit.

The node will now be started but have no connections. There will be warnings about this.

Configure Freenet over Tor

The following settings need to be changed in "Configuration/Core Settings" - make sure you have clicked "Switch to advanced mode".

  • Change "IP address override" to your onioncat IP address retrieved in the previous section.
  • Apply the changes.

Shut down Freenet and edit the wrapper.conf file in the Freenet installation directory. Change the line that contains java.net.preferIPv4Stack=true to java.net.preferIPv4Stack=false. In my wrapper.conf this is:

wrapper.java.additional.3=-Djava.net.preferIPv4Stack=false

Edit the freenet.ini file in the Freenet installation directory. Change or add the following (replace "onioncat IP address" with the IP address obtained when installing onioncat):

node.opennet.bindTo=onioncat IP address
node.bindTo=onioncat IP address
node.load.subMaxPingTime=2500
node.load.maxPingTime=5k

Save the file and restart Freenet. There might be a warning about "Unknown external address". Ignore this as you've explicitly set one. I provide a patch later in this post if you want to get rid of the warning.

Add a friend

Now is the time to add a Darknet friend who is also using Tor/Onioncat. Go to "Friends/Add a friend". Choose the trust and visibility settings and enter a description of the friend. Paste their noderef in the "Enter node reference directly" box.

Give your noderef to your friend and have them add it. Once both connections have been added you should see "Connected" in the Friends list for that connection. The IP address should show the onioncat IPv6 address, beginning with "fd".

Optional Freenet patch

When running a Tor based node, Freenet thinks the onioncat IP address is a local address. Some places in the Freenet code base check for this and reject it as a valid globally routable address. In the FProxy user interface a large warning appears on each page saying it couldn't find the external IP address of the node. The other issue is that local addresses aren't counted for bandwidth statistic reporting. The bandwidth box on the statistics page is empty as a result.

I use a patch, onioncat.txt, that provides a workaround for these two issues. The patch is optional as the node works without it, but it's a useful improvement if you plan to run a Tor based node long term. You should check the patch before applying it blindly and ensure that it's not doing anything nefarious.

Hybrid nodes

If you run a Tor based darknet node then at least one hybrid node must be in the darknet to bridge to the non-Tor nodes. These hybrid nodes will have a public clearnet IP address exposed. I outline how to set up a hybrid node below. For those that trust me: if you send a darknet Tor noderef to me at the freemail address at the bottom of this page, or via normal email, I'll connect and send you a noderef of a hybrid node set up in this manner.

Install Tor and Onioncat as described previously. Install Freenet in the usual way and go through the browser based setup wizard. Choose "Details settings: (custom)" for the security option. On the subsequent pages of the wizard:

  • Enable or Disable the UPnP plugin as necessary depending on what you need for your clearnet connection to work.
  • Choose "Connect to strangers"
  • Choose "Low" or "Normal" security as desired.
  • For "Protection of your downloads..." pick any option you want.
  • Pick a datastore size that you want.
  • Choose the bandwidth limit.

The node will start and connect to opennet.

Shut down Freenet and edit the wrapper.conf file in the Freenet installation directory. Change the line that contains java.net.preferIPv4Stack=true to java.net.preferIPv4Stack=false. In my wrapper.conf this is:

wrapper.java.additional.3=-Djava.net.preferIPv4Stack=false

Edit the freenet.ini file in the Freenet installation directory. Change or add the following:

node.load.subMaxPingTime=2500
node.load.maxPingTime=5k

Save the file and restart Freenet. If you base64 decode the "physical.udp" section of the noderef for the node you should see that it now contains the onioncat IP address as well as the public clearnet IP address.

Adding friends to this node will give those friends access to the wider Freenet datastore when they reciprocate.

Don't forget to check your noderefs to ensure that the ARK and the public IP address contain data you are willing to reveal. Check both the darknet noderef and the opennet noderef. You can decode the base64 of the "physical.udp" line with the GNU base64 command:

$ echo "physical.udp base64 here" |base64 -d

Final steps and caveats

Try visiting a Freenet index site and see if it loads. If it does then the Freenet over Tor setup is working. It will be slower than normal Freenet usage due to Tor latency. If you connect to more darknet nodes it will get faster.

When adding a friend's noderef you can check what IP addresses it will connect to by looking at the "physical.udp" line. This is a base64 encoded list of IP addresses. You might want to check this to ensure that there are no clearnet addresses in there. If there is a clearnet address then it could deanonymize your node when it tries to connect to that in preference to the onioncat address.

The "ark.pubURI" portion of the noderef is an SSK that points to updated IP address information. A node can subscribe to the USK version of this and learn about IP address changes. Your friends node could change their IP address to a clearnet address resulting in you connecting to that.

To avoid the above two issues it's worthwhile running Freenet in a VM or container that does not have clearnet network access and only has access to the onioncat network setup. Alternatively you can use iptables to only allow onioncat traffic for the Freenet process or user running it.
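
Here is a rough sketch of the iptables approach, assuming Freenet runs as a dedicated freenet user and that onioncat's tunnel interface is tun0 (check with ip link and adjust to your setup before relying on it):

# allow the freenet user to talk over loopback and the onioncat tunnel only
ip6tables -A OUTPUT -m owner --uid-owner freenet -o lo -j ACCEPT
ip6tables -A OUTPUT -m owner --uid-owner freenet -o tun0 -j ACCEPT
ip6tables -A OUTPUT -m owner --uid-owner freenet -j REJECT
# onioncat addresses are IPv6, so only loopback is needed over IPv4
iptables -A OUTPUT -m owner --uid-owner freenet -o lo -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner freenet -j REJECT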

The IP addresses exposed in the noderef include all local link addresses and their scopes. This is Freenet bug 6879. This may leak information you don't want leaked. It pays to check the "physical.udp" and "ark.pubURI" entries to see what you are exposing. Remember that any IP addresses exposed over the ARK are discoverable by looking at previous editions of the USK.

The traffic footprint of Freenet may make it easier to track down your IP address from your Tor ID. The volume of data and the nature of the traffic may make certain types of Tor de-anonymization techniques more effective.

Ideally it would be possible to have an opennet of Tor nodes so the exchange of darknet noderefs wouldn't be needed. I haven't been able to get this working yet but I'll continue to investigate it.

I've been running a Tor darknet node for the past week to test how well it works. With three darknet connections it runs well enough for browsing freesites. Sone and the Web of Trust took quite a while to bootstrap due to the lower speed, but once running they work well. FMS and Flip are also usable. I'd expect performance to be even better with more connections.

Tags: freenet 

2016-08-12

Bundling Inferno Applications

Inferno can run as a standalone OS or hosted in an existing OS. In the latter case an executable called emu runs as a virtual operating system, and an application written in Limbo runs inside this virtual OS. When distributing such an application you can install Inferno on the target machine with the application's compiled code inside Inferno's filesystem. This includes a bunch of stuff that the application may not need, since it contains all the utilities for the operating system. To ship only what's needed to run the Limbo application you can create a cut down Inferno distribution that includes just the application's dependencies and run that. Another option is to bundle the root filesystem into the executable, leaving only that executable to be distributed.

Application specific Inferno

To create an application specific Inferno install we need to find out what the dependencies are for the application. Most of the following comes from powerman's post on the topic in Russian.

When emu runs it executes the program /dis/emuinit.dis to initialize the system and start the shell or another program. An application specific Inferno distribution would require:

  • emu
  • /dis/emuinit.dis
  • /dis/lib/emuinit.dis
  • Your application program files inside the Inferno directory structure somewhere.

Powerman's post goes into detail on how to do this with the following "Hello World" Limbo application:

implement HelloWorld;
include "sys.m";
include "draw.m";

HelloWorld: module
{
    init: fn(nil: ref Draw->Context, nil: list of string);
};

init(nil: ref Draw->Context, nil: list of string)
{
    sys := load Sys Sys->PATH;
    sys->print("Hello World!\n");
}

With this in a helloworld.b file, compiling it produces helloworld.dis. This contains the compiled bytecode for the Dis virtual machine. disdep will list the dependencies that it requires:

; limbo helloworld.b
; ./helloworld
Hello World!
; disdep helloworld.dis
; disdep /dis/emuinit.dis
/dis/lib/arg.dis
/dis/sh.dis
/dis/lib/bufio.dis
/dis/lib/env.dis
/dis/lib/readdir.dis
/dis/lib/filepat.dis
/dis/lib/string.dis

This shows that helloworld has no dependencies outside of what's already built into emu, while emuinit.dis has a few. Creating a directory layout with just these files and emu should be enough to run helloworld. In the following example foo is the directory containing the full Inferno distribution where helloworld.b was compiled.

$ mkdir -p hw/dis/lib
$ cd hw
$ cp ../foo/Linux/386/bin/emu .
$ cp ../foo/dis/emuinit.dis dis/
$ cp ../foo/dis/lib/arg.dis dis/lib/
$ cp ../foo/dis/lib/bufio.dis dis/lib/
$ cp ../foo/dis/lib/env.dis dis/lib/
$ cp ../foo/dis/lib/readdir.dis dis/lib/
$ cp ../foo/dis/lib/filepat.dis dis/lib/
$ cp ../foo/dis/lib/string.dis dis/lib/
$ cp ../foo/dis/sh.dis dis/
$ cp ../foo/usr/inferno/helloworld.dis .
$ ./emu -r. helloworld
Hello World!

Some of these dependencies aren't needed, for example sh.dis, as we don't run the shell. See Powerman's post for a more extensive example that has dependencies.

Bundling application into emu

Instead of distributing a directory of files as in the previous example it's possible to bundle a root filesystem inside the emu program. This allows distributing just a single executable that runs the Limbo application. The steps to do this involve finding the dependencies of the program as above and creating a kernel configuration file that lists them.

The file /emu/Linux/emu is the kernel configuration file for the Linux Inferno VM. It has a root section which defines the root filesystem:

root
        /dev    /
        /fd     /
        /prog   /
        /prof   /
        /net    /
        /net.alt        /
        /chan   /
        /nvfs   /
        /env    /
#       /dis
#       /n
#       /icons
#       /osinit.dis
#       /dis/emuinit.dis
#       /dis/lib/auth.dis
#       /dis/lib/ssl.dis
#       /n/local /

The actual filesystem, as located by the -r command line argument to emu, is overlaid on top of this. Copying this file and adding our dependencies will bundle them into a custom built executable. This is what the root section for our helloworld looks like:

root
        /dev    /
        /fd     /
        /prog   /
        /prof   /
        /net    /
        /net.alt        /
        /chan   /
        /nvfs   /
        /env    /
        /dis
        /dis/emuinit.dis /usr/chris/helloworld.dis

Here we make helloworld.dis appear in the root filesystem as /dis/emuinit.dis, which is the first program run as the system comes up. Note that this file must use tabs, not spaces, for the entries. An executable can be compiled with this bundled root filesystem with:

$ cd emu/Linux
$ ...create helloworld configuration file...
$ mk CONF=helloworld

This produces an executable o.helloworld which when run will execute the helloworld.dis embedded inside it:

$ ./o.helloworld
Hello World!

When stripped the executable is about 1.5MB. This executable links to the X11 libraries. For a headless system you can remove the dependency by using the emu-g kernel configuration file as a base. This removes the drivers that use X11 and prevents linking against the X11 libraries. The resulting executable when stripped is about 700K.

The executable produced by this process has some dynamic dependencies - libc, etc - that most glibc applications have. It should be possible to use musl-libc to produce a static binary and I'll cover this in another post.

Tags: inferno 


This site is accessible over Tor as hidden service 6vp5u25g4izec5c37wv52skvecikld6kysvsivnl6sdg6q7wy25lixad.onion, or Freenet using key:
USK@1ORdIvjL2H1bZblJcP8hu2LjjKtVB-rVzp8mLty~5N4,8hL85otZBbq0geDsSKkBK4sKESL2SrNVecFZz9NxGVQ,AQACAAE/bluishcoder/-61/

