Bluish Coder

Programming Languages, Martial Arts and Computers. The Weblog of Chris Double.


2015-02-19

Spawning Windows Commands in Wasp Lisp and MOSREF

It's been a while since I last wrote about MOSREF and Wasp Lisp. MOSREF is the secure remote injection framework written in Wasp Lisp. It facilitates penetration testing by enabling a console node to spawn drone nodes on different machines. The console handles communication between nodes and can run lisp programs on any node.

The console can execute programs on other nodes with the input and output redirected to the console. One use for this is to create remote shells. MOSREF uses the Wasp Lisp function spawn-command. The implementation for this in Linux is fairly small and simple. On Windows drones it's somewhat more difficult. It's not implemented in current Wasp Lisp and attempting to use the sh command in MOSREF or the spawn-command function in Lisp fails with an error.
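To illustrate the semantics that spawn-command provides (this is an illustrative Python sketch, not the Wasp Lisp implementation), the idea is to start a command with its input and output redirected through pipes so the caller can drive it like a bidirectional channel:

```python
import subprocess

# Rough sketch of what spawn-command offers: run a command with
# stdin/stdout redirected so the caller can send input and collect
# output. A Windows drone would run e.g. "cmd.exe /c echo hi" instead.
proc = subprocess.Popen(
    ["sh", "-c", "echo hi"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()
print(out)  # b'hi\n'
```

The Wasp version returns a connection object that the console can wait on and send to, as the transcripts below show.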

I've been meaning to try implementing this for quite a while and finally got around to it recently. I'm doing the work in the win_spawn branch of my github fork of WaspVM. With that version of Wasp Lisp and MOSREF built with Windows and Linux stubs available you can spawn Windows commands and capture the output:

>> (define a (spawn-command "cmd.exe /c echo hi"))
:: [win32_pipe_connection 5179A0]
>> (wait a)
:: "hi\r\n"
>> (wait a)
:: close

Bidirectional communication works too:

>> (define a (spawn-command "cmd.exe"))
:: [win32_pipe_connection 517770]
>> (wait a)
:: "Microsoft Windows ..."
>> (send "echo hi\n" a)
:: [win32-pipe-output 517748]
>> (wait a)
:: "echo hi\nhi\r\n\r\nE:\l>"
>> (send "exit\n" a)
:: [win32-pipe-output 517748]
>> (wait a)
:: "exit\n"
>> (wait a)
:: close

With that implemented and some minor changes to MOSREF to remove the check for Windows you can interact with remote Windows nodes. I made a quick video demonstrating this. There is no sound but it shows a linux console on the left and a windows shell on the right running in a VM.

I create a Windows drone and copy it to a location the Windows VM can access using the MOSREF cp command. This actually copies from the console, where the drone was created, to another Linux drone called tpyo. The Windows VM runs on the same machine as tpyo and can access the drone executable. This is run in the VM to connect to the console.

Once connected I run a few Lisp commands on the Windows node. The lisp is compiled to bytecode on the console, and the bytecode is shipped to the drone where it executes. The result then goes back to the console. This is all normal MOSREF operation and works already, I just do it to ensure things are working correctly.

Next I run a sh command which executes the command in the windows VM with the result sent back to view on the console. Then I do a typo which breaks the connection because of a bug in my code, oops. I recover the drone, reconnect, and run a remote shell like I originally intended. This spawning of commands on Windows is the new code I have implemented.

The video is available as mosref.webm, mosref.mp4 or on YouTube.

The implementation on Windows required a bunch of Win32 specific code. I followed an MSDN article on redirecting child process output and another on spawning console processes. This got the basic functionality working pretty quickly but hooking it into the Wasp Lisp event and stream systems took a bit longer.

Wasp uses libevent for asynchronous network and timer functionality. I couldn't find a way for this to be compatible with the Win32 HANDLEs that result from the console spawning code. I ended up writing derived connection, input and output Wasp VM classes for Win32 pipes that use Win32 asynchronous procedure calls (APCs) to avoid blocking reads. My inspiration for this was the existing Wasp routines that interact with the Win32 console used by the REPL.

A connection is basically a bidirectional stream where you can obtain an input and an output channel. A wait on an input channel receives data and a send on the output channel transmits data. When wait is called on the input channel a callback is invoked which should do the read. This can't block otherwise all Wasp VM coroutines will stop running. The callback instead sets a Win32 event which notifies a thread to read data and post the result back to the main thread via an asynchronous procedure call. A send on the output channel invokes another callback which does the write. Although this can technically block if the pipe buffers are full I currently call Write directly.
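The read side of this pattern can be sketched in Python (conceptual only, not the actual C implementation): a dedicated thread performs the blocking read and posts the result back to the scheduler thread through a queue, which stands in for the Win32 APC that the Wasp VM code uses.

```python
import os
import queue
import threading

# Sketch of the non-blocking read pattern: the blocking read happens on
# a worker thread, and completed reads are handed back to the main
# thread via a queue (standing in for a Win32 APC).
def make_async_reader(fd):
    results = queue.Queue()
    def reader():
        while True:
            data = os.read(fd, 4096)  # blocking read, off the main thread
            results.put(data)
            if not data:              # empty read means EOF
                break
    threading.Thread(target=reader, daemon=True).start()
    return results

r, w = os.pipe()
results = make_async_reader(r)
os.write(w, b"hi\r\n")
os.close(w)
print(results.get())  # b'hi\r\n'
```

The main thread only ever takes completed reads off the queue, so the VM's coroutines keep running while the worker thread waits on the pipe.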

The Wasp VM scheduler has code that checks if there are any active processes running and can do a blocking wait on libevent for notification, to prevent spinning in a polling loop. This had the side effect of preventing the asynchronous procedure call from running, as Windows only executes APCs at certain control points. I had to insert a check so that when a reading process is de-scheduled waiting for an APC but still alive, the event loop keeps spinning and a call to SleepEx occurs, giving the APC a chance to run.

I'm still working on testing and debugging the implementation but it works pretty well as is. Before I submit a pull request I want to clean up the code a bit and maybe combine some of the duplicate functionality from the console handling code and the pipe code. I also need to check that I'm cleaning up resources correctly, especially the spawned reading/APC handling threads.

Some minor changes were needed to other parts of Wasp Lisp and those commits are in the github repository. They involve environment variable handling on Windows. First I had to enable the support for it, and then change it so on Windows the environment names were all uppercase. This avoided issues with Wasp looking for commands in PATH vs the Path that was set on my machine. On Windows these are case insensitive.
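The lookup problem can be sketched as follows (getenv_ci is a hypothetical helper for illustration, not a Wasp function): on Windows, "PATH" and "Path" name the same variable, so normalising names to upper case gives consistent lookups.

```python
# Case-insensitive environment lookup: normalise variable names to
# upper case so "PATH" and "Path" resolve to the same entry, as they
# do on Windows. Illustrative sketch only.
def getenv_ci(name, env):
    upper = {k.upper(): v for k, v in env.items()}
    return upper.get(name.upper())

env = {"Path": "/usr/bin:/bin"}  # as the variable may appear on a Windows machine
print(getenv_ci("PATH", env))    # /usr/bin:/bin
```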

For building a Windows compatible Wasp VM stub and REPL I used a cross compiler on Linux. I used the gcc-mingw-w64 package in Ubuntu for this. Build Wasp VM with:

$ ./configure --host=i686-w64-mingw32
$ OS=MINGW32 CC=i686-w64-mingw32-gcc make

This puts the Windows stub in the stubs directory. I copied this to the stubs directory of the node running the console so it could generate Windows drones. I had to build libevent for Windows using the same cross compiler and tweak the Wasp Makefile.cf to find it. Removing the -mno-cygwin flag was needed as well. I'll do patches to have the makefile work for cross compilation without changes if no one else gets to it.

Tags: waspvm 

2015-01-15

Decentralized Websites with ZeroNet

ZeroNet is a new project that aims to deliver a decentralized web. It uses a combination of bittorrent, a custom file server and a web based user interface, and manages to provide a pretty usable experience.

Users run a ZeroNet node and do their web browsing via the local proxy it provides. Website addresses are public keys, generated using the same algorithm as used for bitcoin addresses. A request for a website key results in the node looking in the bittorrent network for peers that are seeding the site. Peers are selected and ZeroNet connects directly to them using a custom file server protocol that it implements. This is used to download the files required for the site. Bittorrent is only used for selecting peers, not for the site contents.
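The bitcoin-style address encoding can be sketched roughly as follows. This is a simplified illustration: real bitcoin addresses hash the public key with SHA-256 and then RIPEMD-160, while this sketch uses SHA-256 alone, and the example key is made up.

```python
import hashlib

# Simplified sketch of bitcoin-style (base58check) address encoding.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    # Append a 4-byte double-SHA-256 checksum, then base58 encode.
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))  # leading zero bytes encode as '1'
    return "1" * pad + out

# Real addresses use RIPEMD-160 of SHA-256 of the public key; SHA-256
# truncated to 20 bytes is used here purely for illustration.
pubkey_hash = hashlib.sha256(b"example public key").digest()[:20]
address = base58check(b"\x00" + pubkey_hash)  # 0x00 version byte, as bitcoin uses
print(address)
```

The resulting string has the same shape as the site addresses shown below, such as the 'hello' site address.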

Once a site is retrieved the node then starts acting as a peer serving the sites content to users. The more users browsing your site, the more peers become available to provide the data. If the original site goes down the remaining peers can still serve the content.

Site updates are done by the owner making changes and then signing these changes with the private key for the site address. The update then gets distributed to the peers that are seeding the site.

Browsing is done through a standard web browser. The interface uses Websockets to communicate with the local node and receive real time information about site updates. The interface uses a sandboxed iframe to display websites.

Running

ZeroNet is open source and hosted on github. Everything is done through the one zeronet.py command. To run a node:

$ python zeronet.py
...output...

This will start the node and the file server. A check is made to see if the file server is available for connections externally. If this fails it displays a warning but the system still works. You won't seed sites or get real time notification of site updates however. The fix for this is to open port 15441 in your firewall. ZeroNet can use UPNP to do this automatically but it requires a MiniUPNP binary for this to work. See the --upnpc command line switch for details.

The node can be accessed from a web browser locally using port 43110. Providing a site address as the path will access a particular ZeroNet site. For example, 1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr is the main 'hello' site that is first displayed. To access it you'd use the URL http://127.0.0.1:43110/1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr.

Creating a site

To create a site you first need to shut down your running node (using ctrl+c will do it) then run the siteCreate command:

$ python zeronet.py siteCreate
...
- Site private key: ...private key...
- Site address: ...site address...
...
- Site created!

You should record the private key and address as you will need them when updating the site. The command results in a data/address directory being created, where 'address' is the site address that siteCreate produced. Inside that are a couple of default files. One of these, content.json, contains JSON data listing the files contained within the site and signing information. This gets updated automatically when you sign your site after doing updates. If you edit the title key in this file you can give your site a title that appears in the user interface instead of the address.
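As a rough illustration of what content.json holds, a minimal sketch might look like the following. The field names are indicative and may not match the current ZeroNet format exactly, and the hash and signature values are placeholders:

```json
{
  "address": "1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr",
  "title": "My ZeroNet site",
  "files": {
    "index.html": { "sha512": "...hash of the file...", "size": 1024 }
  },
  "modified": 1421543831,
  "signs": { "1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr": "...signature..." }
}
```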

Another file that gets modified during this site creation process is the sites.json file in the data directory. It contains the list of all the sites and some metadata about them.

If you visit http://127.0.0.1:43110/siteaddress in your browser, where siteaddress is the address created with siteCreate, then you'll see the default website that is created. If your node is peering successfully and you access this address from another node it will download the site, display it, and start seeding it. This is how the site data spreads through the network.

Updating a site

To change a site you must first store your files in the data/siteaddress directory. Any HTML, CSS, JavaScript, etc can be put here. It's like a standard website root directory. Just don't delete the content.json file that's there. Once you've added, modified or removed files you run the siteSign command:

$ python zeronet.py siteSign siteaddress
- Signing site: siteaddress...
Private key (input hidden):

Now you enter the private key that was displayed (and hopefully you saved) when you ran siteCreate. The site gets signed and the information stored in content.json. To publish these changes to peers seeding the site:

$ python zeronet.py sitePublish siteaddress
...publishes to peers...

If your node is running it will serve the files from the running instance. If it is not then the sitePublish command will continue running to serve the files.

Deleting a site

You can pause seeding a site from the user interface but you can't delete it. To do that you must shutdown the node and delete the sites data/siteaddress directory manually. You will also need to remove its entry from data/sites.json. When you restart the node it will no longer appear.

Site tips

Because the website is displayed in a sandboxed iframe there are some restrictions in what it can do. The most obvious is that only relative URLs work in anchor elements. If you click on an absolute URL it does nothing. The sandboxed iframe has the allow-top-navigation option which means you can link to external pages or other ZeroNet sites if you use the target attribute of the anchor element and set it to _top. So this will work:

<a href="http://bluishcoder.co.nz/" target="_top">click me</a>

But this will not:

<a href="http://bluishcoder.co.nz/">click me</a>

Dynamic websites are supported, but they require help from centralized services. The ZeroNet node includes an example of a dynamic website called 'ZeroBoard'. This site allows users to enter a message in a form which is published to a list of messages that all peering nodes will see. It does this by posting the message to an external web application that the author runs on the standard internet. This web app updates a file inside the site's ZeroNet directory and then signs it. The result is published to all peers and they automatically get the update through the Websocket interface.

Although this works it's unfortunate that it relies on a centralized web application. The ZeroNet author has posted that they are looking at decentralized ways of doing this, maybe using bitmessage or some other system. Something involving peer to peer WebRTC would be interesting.

Conclusion

ZeroNet seems to be most similar to tor, i2p or freenet. Compared to these it lacks the anonymity and encryption aspects. But it decentralizes the site content which tor and i2p don't. Freenet provides decentralization too but does not allow JavaScript in sites. ZeroNet does allow JavaScript but this has the usual security and tracking concerns.

Site addresses are in the same format as bitcoin addresses. It should be possible to import the private key into bitcoin and then bitcoins sent to the public address of a site would be accessed by the site owner. I haven't tested this but I don't see why it couldn't be made to work. Maybe this could be leveraged somehow to enable a web payment method.

ZeroNet's lack of encryption or obfuscation of the site contents could be a problem. A peer holds the entire site in a local directory. If this contains malicious or illegal content it can be accidentally run or viewed. Or it could be picked up in automated scans and the user held responsible. Even if the site originally had harmless content the site author could push an update out that contains problematic material. That's a bit scary.

It's early days for the project and hopefully some of these issues can be addressed. As it is though it works well, is very usable, and is an interesting experiment in decentralizing websites. Some links for more information:

Tags: mozilla  zeronet 

2015-01-08

Improving Linux Font Support in Self

In the Linux version of the Self programming language implementation the fonts used are standard X11 fonts. On modern Linux systems these don't look great and a common question asked in the mailing list is how to improve it. Fonts on the Mac OS X build of Self use a different system and they look much better. It would be good to convert the Linux version to use freetype to gain more control over fonts.

I worked on adding Freetype support a couple of years ago and wrote about it on the mailing list. I haven't done much on it since then but the code is in my github repository under the linux_fonts branch. That work adds Self primitives to access Freetype but does not integrate it into the Self font system. I hope to be able to continue this work sometime but I'm unlikely to get to it in the near future. This post points to the code and shows how to use the primitives, in case someone else would like to take it forward as a project.

To try the code out you'll need to build Self in the usual manner but using that branch. Once the desktop is launched I test it by creating an object with the following slots:

(| parent* = traits oddball.
   window <- desktop w anyWindowCanvas.
   draw <- nil.
   font <- nil.
   xftcolor <- nil.
   xrcolor <- nil.
 |)

With an outliner for this object I create an evaluator for it and, one at a time, evaluate the following code snippets:

draw: window display xftDrawCreate: window platformWindow
  Visual: window display screen defaultVisualOfScreen
  Colormap: window display screen defaultColormapOfScreen

xftcolor: xlib xftColor new.
xrcolor: xlib xRenderColor new.
xrcolor alpha: 16rffff

window display xftColorAllocValue: window display screen defaultVisualOfScreen
  Colormap: window display screen defaultColormapOfScreen
  RenderColor: xrcolor
  XftColor: xftcolor

font: window display xftFontOpenNameOnScreen: window display screen number
 Name: 'verdana-18'

draw xftDrawString8: xftcolor
  Font: font X: 100 Y: 100
  String: 'Hello World!'

This results in the text 'Hello World!' appearing at position 100@100 on the desktop in the Verdana font. The video below demonstrates this to show how it's done in the Self user interface:

This is a common workflow I use to prototype things in Self. I create an empty object and populate it with slots to hold data. With an evaluator created for that object these slots are accessible without needing to have an object to call them on. This is known as 'implicit self calls'. The message for the slot is implicitly sent to the current object. I create and remove slots as needed. I can use the Self outliner to drill down on the slots to look at and manipulate those objects specifically if needed.

Hopefully this Freetype support can be used as a base for better looking fonts on Linux. If you are keen to take it further, or have ideas on how to integrate it, I can be contacted using the details at the bottom of my blog, or you can raise it in the Self mailing list or on github.

Tags: self 

2014-12-31

Using Inferno OS on Linux

It's the end of the year and I'm going through some of the patches I have for projects and putting them online so I have a record of them. I'm working through things I did on Inferno OS.

Source repositories

The official Inferno source is hosted in a mercurial repository on Google Code. There is also a repository containing a snapshot of this with changes to get Inferno building on Android systems on bitbucket. Finally there is a tarball containing a snapshot which includes fonts required by Inferno that are not stored in the mercurial repository.

I carry a couple of custom patches to Inferno and I also have changes to the Android port that I keep in a patch file. When working on these I find it useful to be able to diff between the current Inferno source and the Hellaphone source to see what changes were made.

I've cleaned these up and put the patches in a github repository with a conversion of the mercurial repositories to git. The Hellaphone code is a branch of the repository, making it easy to cherry pick and diff between that and the main source. I've put this in github.com/doublec/inferno.

There are four main branches in that repository:

  • 20100120 - Snapshot with the additional fonts
  • hellaphone - Hellaphone Android port
  • inferno - Direct import of the Inferno mercurial repository
  • master - Everything needed for building inferno on Linux. It is a recent working version of the inferno
    branch with the additional fonts from the `20100120` branch and any additional work in progress patches
    to work around existing issues.
    

I use master for running Inferno on desktop and hellaphone for phone development. The README on master explains how to build and the README on hellaphone has the steps to get it running on a phone.

Building on Linux

Building Inferno on Linux with the official source involves downloading an archive containing a snapshot of the source with the addition of some font files that are licensed differently. You then do a mercurial update to get the latest source code. Something like the following performs the build:

$ wget http://www.vitanuova.com/dist/4e/inferno-20100120.tgz
$ tar xvf inferno-20100120.tgz
$ cd inferno
$ hg pull -u
$ ...edit mkconfig so the following entries are set...
ROOT=/path/to/inferno
SYSHOST=Linux
OBJTYPE=386
$ export PATH=$PATH:`pwd`/Linux/386/bin
$ ./makemk.sh
$ mk nuke
$ mk install

Building using my github repository requires:

$ git clone https://github.com/doublec/inferno
$ cd inferno
$ sh Mkdirs
$ ...edit `mkconfig` so the following entries are set to these values...
ROOT=/root/of/the/inferno/git/clone
SYSHOST=Linux
OBJTYPE=386
$ export PATH=$PATH:`pwd`/Linux/386/bin
$ ./makemk.sh
$ mk nuke
$ mk install

Running Inferno

I use a shell script to run Inferno. The contents look like:

export PATH=$PATH:~/inferno/Linux/386/bin
export EMU="-r/home/$USER/inferno -c1 -g1920x1008"
exec emu $* /dis/sh.dis -a -c "wm/wm wm/logon -u $USER"

This starts Inferno using the same username as on the Linux system and with the desktop size set to 1920x1008. The JIT is enabled (the -c1 flag does this). For this to work I need to have a /usr/chris directory in the Inferno file system with some default files. This can be created by copying the existing /usr/inferno directory:

$ cd inferno
$ cp -r usr/inferno usr/chris

The patches in my master branch add a -F command line switch to the emu command to use full screen in X11. This is useful when using a window manager that doesn't allow switching to and from full screen. On Ubuntu I often run Inferno full screen in a workspace that I can switch back and forth to get between Linux and Inferno. This can be run with the above shell script (assuming it is named inferno and on the path):

$ inferno -F

Accessing the host filesystem and commands

Often I want to access the host filesystem from within Inferno. This can be done using bind with a special path syntax to represent the host system. I like to keep the path on Inferno the same as the path on Linux so when running host commands the paths in error messages match up. This makes it easier to click on error messages and load the file in Inferno text editors. I map /home as follows (';' represents an Inferno shell prompt):

; mkdir /home
; bind '#U*/home' /home

Now all the directories and files under /home in Linux are available under /home in Inferno.

The Inferno os command is used to run host programs from within Inferno. To run git for example:

; os -d /home/username/inferno git status
...output of git...

Note that I had to pass the -d switch to give the path on the host filesystem that will be the current working directory for the command. I can even do Firefox builds from within Inferno:

; cd /home/username/firefox
; os -d /home/username/firefox ./mach build
...firefox building...

Any errors in source files that appear can be shown in the Inferno acme editor as I've mapped /home to be the same on Inferno as on the host.

VNC in Inferno

Sometimes I need to run graphical host commands from within Inferno. Mechiel Lukkien has a large number of useful Inferno programs written in Limbo. One of these is a vnc client. From a source clone of this project it can be installed in Inferno with something like:

; cd vnc
; SYSROOT=Inferno
; ROOT=
; mk install

Start a VNC server from the Linux side:

$ vncpasswd
...set password...
$ vncserver :1

Connect to it from the Inferno side:

; vncv tcp!127.0.0.1!5901

Now you can run Firefox and other useful programs from within Inferno. Inferno has its own web browser, Charon, but it's nice to be able to use a full browser, terminals, etc to do Linux things from within Inferno when needed. vncv isn't feature complete - it doesn't do modifier keys unfortunately, but is still a useful tool.

Mechiel Lukkien has many other useful libraries worth exploring. There is an ssh implementation, a mercurial client, and an interesting 'irc filesystem' which I've written about before.

Other tips

Acme is one of the editors available in Inferno. It's also available on other operating systems and Russ Cox has a Tour of Acme video which explains its features. Most of these work in the Inferno port too.

Pete Elmore has an introduction to the Inferno shell and other Inferno posts. The shell has a way of passing data from one shell command to another. For example, du can recursively list all files in the current directory and subdirectories:

; du -an

To pass this list to grep to find the text 'ASSERT' within these files:

; grep -n ASSERT `{du -an}
filename.h:100: ASSERT(1)

Limbo

Limbo is the programming language used in Inferno. It is a typed programming language with support for concurrency using channels and lightweight threads. The book Inferno Programming with Limbo is available as a PDF and also makes for a good introduction to Inferno itself. The line between Inferno as an OS and Inferno as a language runtime is a bit blurry at times.

Why Inferno?

Inferno has some interesting ideas with regards to distributing computing power and provides a way to explore some ideas that were in the Plan 9 OS but usable on multiple platforms. My post on sharing computer and phone resources gives some examples of ways it could be used. Its lightweight threads and concurrency make it a useful programming system for server based tasks.

Inferno is a small hackable OS for learning about operating systems. For more on this the book Principles of Operating Systems by Brian Stuart covers Inferno and Linux. One aspect I was interested in exploring was porting parts of the OS from C to the ATS programming language in a similar manner to what I describe in my preventing heartbleed bugs with safer programming languages post.

More on Inferno is available at:

Tags: inferno 

2014-12-22

Self Benchmarking

I was asked on twitter about the current speed of the Self implementation. The request was for a method send benchmark so I wrote a simple one and compared against Pharo, a Smalltalk implementation.

The implementation and technology behind Self is quite old in comparison to modern compiler implementations but at the time it was state of the art. I hoped it would hold up reasonably well. The test I wrote in Self was:

(|
  doSomething = ( ^self ).
  test = ( |n <- 0|
           [ n < 100000000 ] whileTrue: [ doSomething. n: n + 1 ]
         )
|)

Running this in the Self shell shows:

"Self 1" _AddSlots: ...code snippet from above...
shell
"Self 2" [ test ] time.
2587

2.5 seconds seems a bit slow to me but I tested in Pharo to confirm and to see how it compares. The Pharo code looks almost exactly like the Self code:

doSomething = ^self
test = |count|
       count := 0.
       [ count < 100000000 ] whileTrue: [
         count := count + 1.
         doSomething.
       ].

[ MyObject new test ] timeToRun
  => 0:00:00:00.239

That's 239ms vs 2,587ms, a factor of over 10x. Further investigation revealed that calling 'time' in Self seems to cause the code to run slower. If I call the 'test' method first, and then call 'time' then it's much faster:

"Self 2" [ test ] time.
2587
"Self 3" [ test ] time.
2579
"Self 4" test.
nil
"Self 5" [ test ] time.
650
"Self 6" [ test ] time.
628

At 650ms it is about 2.7x slower than Pharo, a large improvement over the initial 10x. More investigation is needed to see if there is room for other improvements.
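The warm-up effect above is the usual reason benchmark harnesses discard initial runs. A small sketch of that measurement pattern (illustrative Python; CPython has no JIT, but the harness shape is the same):

```python
import time

# Time the first run separately from later runs, so one-off costs such
# as JIT compilation do not distort the steady-state figure.
def bench(f, warmup=2, runs=3):
    start = time.perf_counter()
    f()
    first = time.perf_counter() - start
    for _ in range(warmup):
        f()                       # warm-up runs, timings discarded
    steady = []
    for _ in range(runs):
        start = time.perf_counter()
        f()
        steady.append(time.perf_counter() - start)
    return first, min(steady)

def work():
    n = 0
    while n < 100_000:
        n += 1

first, steady = bench(work)
print(f"first: {first * 1000:.2f}ms, steady: {steady * 1000:.2f}ms")
```

Reporting the minimum of the steady-state runs, as here, is one common convention; a mean with variance is another.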

The Self implementation has some primitives that can be changed to show debugging information from the JIT. All primitives can be listed with:

primitives primitiveList do: [ | :e | e printLine ].

Looking through this shows some interesting ones prefixed with _Print that can be set to output debug data. One is _PrintCompiledCode. Setting this to true allows viewing the generated assembler code on the Self console.

"Self 16" _PrintCompiledCode: true.
false
"Self 17" 40 + 2.
...
  // loadOop2
movl $0xa0 (40), #-16(%ebp)
  // loadOop2
movl $0x8 (2), #-20(%ebp)
  // loadArg
movl #-20(%ebp), %ebx
movl %ebx, #4(%esp)
  // selfCall
movl #-16(%ebp), %ebx
movl %ebx, (%esp)
nop
nop
nop
call 0x8186597 <SendMessage_stub> (bp)
  // begin SendDesc
jmp L7f
  .data 3
jmp L9f
  .data 0
  .data 0
  .data 0x4578341 ('+')
  .data 4
L7: 
L8: 
  // end SendDesc
movl %eax, #-16(%ebp)
  // epilogue
movl #-16(%ebp), %eax
  // restore_frame_and_return
leave
ret

Others, like _PrintInlining display debug information related to inlining code.

"Self 18" _PrintInlining: true
false
"Self 19" test.
*inlining size, cost 0/size 0 (0x8b7e864)
*PIC-type-predicting - (1 maps)
*type-casing -
*inlining - (smallInt.self:153), cost 1/size 0 (0x8b7ee38)*
 *inlining asSmallInteger (number.self:108), cost 1/size 0 (0x8b7fa94)*
  *inlining raiseError, cost 0/size 0 (0x8b80530)*
  *inlining asSmallIntegerIfFail: (smallInt.self:302), cost 0/size 0 (0x8b808fc)*
 *inlining TSubCC:
 *cannot inline value:With:, cost = 10 (rejected)
 *marking value:With: send ReceiverStatic
 *sending value:With:
*sending -
*inlining size:, cost 0/size 0 (0x8b8434c)
*inlining rep, cost 0/size 0 (0x8b846a8)
*PIC-type-predicting removeFirstLink (1 maps)
*type-casing removeFirstLink
*inlining removeFirstLink (list.self:300), cost 2/size 0 (0x8b84b48)*
 *inlining next, cost 0/size 0 (0x8b85628)
 *PIC-type-predicting remove (1 maps)
 *type-casing remove
 *cannot inline remove, cost = 9 (rejected)
 *sending remove
*sending removeFirstLink
*PIC-type-predicting value (1 maps)
*type-casing value
*inlining value, cost 0/size 0 (0x8b86570)*
*sending value
*inlining asSmallInteger (number.self:108), cost 1/size 0 (0x8b7e5b0)*
 *inlining raiseError, cost 0/size 0 (0x8b7f074)*
 *inlining asSmallIntegerIfFail: (smallInt.self:302), cost 0/size 0 (0x8b7f440)*
*inlining TSubCC:
*cannot inline value:With:, cost = 10 (rejected)
*marking value:With: send ReceiverStatic
*sending value:With:
nil

For more involved benchmarks there is some code shipped with the Self source. It can be loaded with:

"Self 28" bootstrap read: 'allTests' From: 'tests'.
reading ./tests/allTests.self...
reading ./tests/tests.self...
reading ./tests/programmingTests.self...
reading ./tests/debugTests.self...
reading ./tests/lowLevelTests.self...
reading ./tests/numberTests.self...
reading ./tests/deltablue.self...
reading ./tests/sicTests.self...
reading ./tests/branchTests.self...
reading ./tests/nicTests.self...
reading ./tests/testSuite.self...
reading ./tests/languageTests.self...
reading ./tests/cons.self...
reading ./tests/benchmarks.self...
reading ./tests/richards.self...
reading ./tests/parser.self...
reading ./tests/parseNodes.self...
modules allTests

There are methods on the bootstrap object for running the tests and printing results. For example:

"Self 32" benchmarks measurePerformance

                 compile    mean       C    mean/C       %
recur:                 5       0
sumTo:                 2       7
sumFromTo:             2       7
fastSumTo:             2       6
nestedLoop:            2      10
...

There are also measurePerformance2 and measurePerformance3 methods. The code comments for the measure2 and measure3 methods explain the differences.

Self 2 was well known for generating very fast code that compared favourably with C. The implementation of this was described in Craig Chambers' thesis. Compilation was slow however, so in Self 3 and 4 two new compilers were created: 'nic' and 'sic'. I believe this is covered in Urs Hölzle's thesis. The 'nic' compiler is the 'Non Inlining Compiler' and is simpler to implement. It's the compiler you write to get Self bootstrapped and running on new platforms fairly quickly. There is no inlining and no type feedback so performance is slower, as shown by the benchmarking results when changing the compiler used, described below. The 'sic', or 'Simple Inlining Compiler', generates better code through more optimisations. While neither is as fast as the Self 2 compiler, they are faster at compiling code and make for a better interactive system. You can read more about this in the Merlintec Self FAQ.

There is a defaultCompiler slot in the benchmark object that can be set to nic or sic to compare the different JIT compilers that Self implements. Comparing the 'nic' compiler vs the 'sic' compiler shows a speedup of about 6x in the 'richards' benchmark when using 'sic'.

There's probably a fair bit of low hanging fruit to improve run times. I don't think the x86 backend has had as much work on it as the Sparc or PPC backends. The downside is much of the compiler code is written in C++ so for people interested in 'Self the language' it's not as fun to hack on. Klein was an attempt to write a Self VM in Self and includes a compiler and assembler which might make a more interesting project for those that want to use Self itself to implement compiler code.

Tags: self 


This site is accessible over tor as hidden service 6vp5u25g4izec5c37wv52skvecikld6kysvsivnl6sdg6q7wy25lixad.onion, or Freenet using key:
USK@1ORdIvjL2H1bZblJcP8hu2LjjKtVB-rVzp8mLty~5N4,8hL85otZBbq0geDsSKkBK4sKESL2SrNVecFZz9NxGVQ,AQACAAE/bluishcoder/-61/

