Bluish Coder

Programming Languages, Martial Arts and Computers. The Weblog of Chris Double.


Firefox Media Source Extensions Update

This is an update on some recent work on the Media Source Extensions API in Firefox. There has been a lot of work done on MSE and the underlying media framework by Gecko developers and this update just covers some of the telemetry and exposed debug data that I’ve been involved with implementing.


Mozilla has a telemetry system to get data on how Firefox behaves in the real world. We’ve added some MSE video stats to telemetry to help identify usage patterns and possible issues.

Bug 1119947 added information on what state an MSE video is in when the video is unloaded. The intent of this is to find out if users are exiting videos due to slow buffering or seeking. The data is available under the VIDEO_MSE_UNLOAD_STATE category. This has five states:

0 = ended, 1 = paused, 2 = stalled, 3 = seeking, 4 = other

The data provides a count of the number of times a video was unloaded for each state. If a large number of users were exiting during the stalled state then we might have an issue with videos stalling too often. Looking at current stats on beta 37 we see about 3% unloading on stall with 14% on ended and 57% on other. The ‘other’ represents unloading during normal playback.

Bug 1127646 will add additional data to get:

  • Join Latency - time between video load and video playback for autoplay videos
  • Mean Time Between Rebuffering - play time between rebuffering hiccups
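As a rough sketch of what these two metrics measure, here is an illustrative Python model computing them from a list of (event, timestamp) pairs. The event names and data shape are assumptions for illustration, not the actual telemetry code.

```python
# Hypothetical sketch: computing join latency and mean time between
# rebuffering from recorded (event_name, timestamp_in_seconds) pairs.

def join_latency(events):
    """Time between video load and first playback for an autoplay video."""
    load = next(t for name, t in events if name == "load")
    play = next(t for name, t in events if name == "playing")
    return play - load

def mean_time_between_rebuffering(events):
    """Average play time between rebuffering hiccups."""
    playing_spans = []
    start = None
    for name, t in events:
        if name == "playing":
            start = t
        elif name in ("waiting", "ended") and start is not None:
            # A "waiting" event marks a rebuffer; "ended" closes the last span.
            playing_spans.append(t - start)
            start = None
    rebuffers = sum(1 for name, _ in events if name == "waiting")
    if rebuffers == 0:
        return sum(playing_spans)
    return sum(playing_spans) / rebuffers

events = [("load", 0.0), ("playing", 1.5), ("waiting", 11.5),
          ("playing", 12.0), ("ended", 22.0)]
print(join_latency(events))                   # 1.5
print(mean_time_between_rebuffering(events))  # 20.0
```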

This will be useful for determining the performance of MSE for sites like YouTube. The bug is going through the review/comment stage and once it lands the data will be viewable at

about:media plugin

While developing the Media Source Extensions support in Firefox we found it useful to have a page displaying internal debug data about active MSE videos.

In particular it was good to be able to get a view of what buffered data the MSE JavaScript API had and what our internal Media Source C++ code stored. This helped track down issues involving switching buffers, memory size of resources and other similar things.

The internal data is displayed in an about:media page. Originally the page was hard coded in the browser but :gavin suggested moving it to an addon. The addon now lives in a github repository that includes the aboutmedia.xpi, which can be installed directly in Firefox. Once installed you can go to about:media to view data on any MSE videos.

To test this, visit a video that has MSE support in a nightly build with the about:config preferences media.mediasource.enabled and media.mediasource.mp4.enabled set to true. Let the video play for a short time then visit about:media in another tab. You should see something like:
    currentTime: 101.40625
    SourceBuffer 0
      start=0 end=14.93043
    SourceBuffer 1
      start=0 end=15

    Internal Data:
      Dumping data for reader 7f9d85ef1800:
        Dumping Audio Track Decoders: - mLastAudioTime: 7.732243
          Reader 1: 7f9d75cba800 ranges=[(10.007800, 14.930430)] active=false size=79880
          Reader 0: 7f9d85e88000 ranges=[(0.000000, 10.007800)] active=false size=160246
        Dumping Video Track Decoders - mLastVideoTime: 7.000000
          Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] active=false size=184613
          Reader 0: 7f9d85985000 ranges=[(0.000000, 10.000000)] active=false size=1281914

The first portion of the displayed data shows the JS API view of the buffered data:

currentTime: 101.40625
  SourceBuffer 0
    start=0 end=14.93043
  SourceBuffer 1
    start=0 end=15

This shows two SourceBuffer objects. One containing data from 0-14.9 seconds and the other 0-15 seconds. One of these will be video data and the other audio. The currentTime attribute of the video is 101.4 seconds. Since there is no buffered data for this range the video is likely buffering. I captured this data just after seeking while it was waiting for data from the seeked point.
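The buffering inference above can be sketched as a containment check: the playback position is compared against each buffered range. A minimal Python illustration using the ranges from the dump (the names are mine, not Firefox code):

```python
# The video is likely buffering when currentTime falls outside every
# buffered range held by a SourceBuffer.

def in_buffered_range(current_time, ranges):
    return any(start <= current_time <= end for start, end in ranges)

source_buffer_0 = [(0.0, 14.93043)]  # ranges mirror the about:media output
source_buffer_1 = [(0.0, 15.0)]

current_time = 101.40625
print(in_buffered_range(current_time, source_buffer_0))  # False -> buffering
print(in_buffered_range(7.5, source_buffer_1))           # True
```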

The second portion of the displayed data shows information on the C++ objects implementing media source:

Dumping data for reader 7f9d85ef1800:
  Dumping Audio Track Decoders: - mLastAudioTime: 7.732243
    Reader 1: 7f9d75cba800 ranges=[(10.007800, 14.930430)] active=false size=79880
    Reader 0: 7f9d85e88000 ranges=[(0.000000, 10.007800)] active=false size=160246
  Dumping Video Track Decoders - mLastVideoTime: 7.000000
    Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] active=false size=184613
    Reader 0: 7f9d85985000 ranges=[(0.000000, 10.000000)] active=false size=1281914

A reader is an instance of the MediaSourceReader C++ class. That reader holds two SourceBufferDecoder C++ instances, one for audio and the other for video. Looking at the video decoder, it has two readers associated with it. These readers are instances of a derived class of MediaDecoderReader tasked with reading frames from a particular video format (WebM, MP4, etc).

The two readers each have buffered data ranging from 0-10 seconds and 10-15 seconds. Neither are ‘active’. This means they are not currently the video stream used for playback. This will be because we just started a seek. You can view how buffer switching works by watching which of these become active as the video plays. The size is the amount of data in bytes that the reader is holding in memory. mLastVideoTime is the presentation time of the last processed video frame.
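When attaching this data to bug reports it can help to turn the reader lines into structured values. Below is a hedged sketch of a parser for the line format shown above; the format is taken from the dump, but this script is not part of Firefox.

```python
import re

# Parse an about:media "Internal Data" reader line into a record.
READER_RE = re.compile(
    r"Reader (\d+): ([0-9a-f]+) ranges=\[\((\d+\.\d+), (\d+\.\d+)\)\] "
    r"active=(true|false) size=(\d+)")

def parse_reader(line):
    m = READER_RE.search(line)
    if m is None:
        return None
    idx, addr, start, end, active, size = m.groups()
    return {"index": int(idx), "address": addr,
            "range": (float(start), float(end)),
            "active": active == "true", "size": int(size)}

line = ("Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] "
        "active=false size=184613")
print(parse_reader(line))
```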

MSE videos will have data evicted as they are played. The size threshold for eviction defaults to 75MB and can be changed with the media.mediasource.eviction_threshold preference in about:config. When data is appended via the appendBuffer method on a SourceBuffer an eviction routine is run. If more data than the threshold is held then we start removing portions of data held in the readers. This shows up in about:media as the start and end ranges being trimmed or readers being removed entirely.
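The eviction behaviour can be modelled with a short sketch. This is an illustrative Python model of the policy described (trim the oldest buffered data until under the threshold), not Gecko's actual algorithm.

```python
EVICTION_THRESHOLD = 75 * 1024 * 1024  # matches the 75MB default preference

def evict(readers, threshold=EVICTION_THRESHOLD):
    """readers: list of dicts with 'size' in bytes, oldest first."""
    total = sum(r["size"] for r in readers)
    evicted = []
    while total > threshold and readers:
        victim = readers.pop(0)      # drop the oldest buffered data first
        total -= victim["size"]
        evicted.append(victim)
    return evicted, total

readers = [{"size": 40 * 1024 * 1024}, {"size": 50 * 1024 * 1024},
           {"size": 10 * 1024 * 1024}]
gone, remaining = evict(readers)
print(len(gone), remaining)  # 1 62914560
```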

This internal data is most useful for Firefox media developers. If you encounter stalls playing videos or unusual buffer switching behaviour then copy/pasting the data from about:media in a bug report can help with tracking the problem down. If you are developing an MSE player then the information may also be useful to find out why the Firefox implementation may not be behaving how you expect.

The source of the addon is on github and relies on a chrome only debug method, mozDebugReaderData on MediaSource. Patches to improve the data and functionality are welcome.


Media Source Extensions is still in progress in Firefox and can be tested on Nightly, Aurora and Beta builds. The current plan is to enable support limited to YouTube only in Firefox 37 on Windows and Mac OS X for MP4 videos. Other platforms, video formats and wider site usage will be enabled in future versions as the implementation improves.

To track work on the API you can follow the MSE bug in Bugzilla.

Tags: mozilla 


Spawning Windows Commands in Wasp Lisp and MOSREF

It’s been a while since I last wrote about MOSREF and Wasp Lisp. MOSREF is the secure remote injection framework written in Wasp Lisp. It facilitates penetration testing by enabling a console node to spawn drone nodes on different machines. The console handles communication between nodes and can run lisp programs on any node.

The console can execute programs on other nodes with the input and output redirected to the console. One use for this is to create remote shells. MOSREF uses the Wasp Lisp function spawn-command. The implementation for this in Linux is fairly small and simple. On Windows drones it’s somewhat more difficult. It’s not implemented in current Wasp Lisp and attempting to use the sh command in MOSREF or the spawn-command function in Lisp fails with an error.

I’ve been meaning to try implementing this for quite a while and finally got around to it recently. I’m doing the work in the win_spawn branch of my github fork of WaspVM. With that version of Wasp Lisp and MOSREF built with Windows and Linux stubs available you can spawn Windows commands and capture the output:

>> (define a (spawn-command "cmd.exe /c echo hi"))
:: [win32_pipe_connection 5179A0]
>> (wait a)
:: "hi\r\n"
>> (wait a)
:: close

Bidirectional communication works too:

>> (define a (spawn-command "cmd.exe"))
:: [win32_pipe_connection 517770]
>> (wait a)
:: "Microsoft Windows ..."
>> (send "echo hi\n" a)
:: [win32-pipe-output 517748]
>> (wait a)
:: "echo hi\nhi\r\n\r\nE:\l>"
>> (send "exit\n" a)
:: [win32-pipe-output 517748]
>> (wait a)
:: "exit\n"
>> (wait a)
:: close

With that implemented, and some minor changes to MOSREF to remove the check for Windows, you can interact with remote Windows nodes. I made a quick video demonstrating this. There is no sound but it shows a Linux console on the left and a Windows shell on the right running in a VM.

I create a Windows drone and copy it to a location the Windows VM can access using the MOSREF cp command. This actually copies from the console where the drone was created to another Linux drone called tpyo. The Windows VM is running on the machine running tpyo and can access the drone executable. This is run in the VM to connect to the console.

Once connected I run a few Lisp commands on the Windows node. The lisp is compiled to bytecode on the console, and the bytecode is shipped to the drone where it executes. The result then goes back to the console. This is all normal MOSREF operation and works already; I just do it to ensure things are working correctly.

Next I run a sh command which executes the command in the Windows VM with the result sent back to view on the console. Then I make a typo which breaks the connection because of a bug in my code, oops. I recover the drone, reconnect, and run a remote shell like I originally intended. This spawning of commands on Windows is the new code I have implemented.

The video is available as mosref.webm, mosref.mp4 or on YouTube.

The implementation on Windows required a bunch of Win32 specific code. I followed an MSDN article on redirecting child process output and another on spawning console processes. This got the basic functionality working pretty quickly but hooking it into the Wasp Lisp event and stream systems took a bit longer.

Wasp uses libevent for asynchronous network and timer functionality. I couldn’t find a way for this to be compatible with the Win32 HANDLEs that result from the console spawning code. I ended up writing derived connection, input and output Wasp VM classes for Win32 pipes that use Win32 asynchronous procedure call callbacks to avoid blocking reads. My inspiration for this was the existing Wasp routines that interact with the Win32 console used by the REPL.

A connection is basically a bidirectional stream where you can obtain an input and an output channel. A wait on an input channel receives data and a send on the output channel transmits data. When wait is called on the input channel a callback is invoked which should do the read. This can’t block otherwise all Wasp VM coroutines will stop running. The callback instead sets a Win32 event which notifies a thread to read data and post the result back to the main thread via an asynchronous procedure call. A send on the output channel invokes another callback which does the write. Although this can technically block if the pipe buffers are full I currently call Write directly.
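The shape of this design can be modelled outside Win32. The Python sketch below is an analogue, not the actual implementation: a threading.Event stands in for the Win32 event that wakes the reading thread, and a queue stands in for the asynchronous procedure call that posts the result back to the main thread.

```python
import threading, queue

read_request = threading.Event()   # plays the role of the Win32 event
results = queue.Queue()            # plays the role of the APC posting back

def reader_thread(source):
    """Does the blocking reads so the scheduler's coroutines never stop."""
    while True:
        read_request.wait()        # sleep until a wait() on the channel fires
        read_request.clear()
        data = source.pop(0) if source else None
        results.put(data)          # "post the result back to the main thread"
        if data is None:
            return                 # channel closed

source = ["hi\r\n"]
t = threading.Thread(target=reader_thread, args=(source,), daemon=True)
t.start()

read_request.set()                 # wait on the input channel sets the event
print(results.get())               # main loop later picks up the data
read_request.set()
print(results.get())               # None -> channel closed
```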

The Wasp VM scheduler has code that checks if there are any active processes running and can do a blocking wait on libevent for notification, to prevent spinning in a polling loop. This had the side effect of preventing the asynchronous procedure call from running, as Windows only executes APCs at certain control points. I had to insert a check so that when our reading process is de-scheduled waiting for the APC, the scheduler knows it is still around and keeps the event loop spinning, with a call to SleepEx allowing the APC to run.

I’m still working on testing and debugging the implementation but it works pretty well as is. Before I submit a pull request I want to clean up the code a bit and maybe combine some of the duplicate functionality from the console handling code and the pipe code. I also need to check that I’m cleaning up resources correctly, especially the spawned reading/APC handling threads.

Some minor changes were needed to other parts of Wasp Lisp and those commits are in the github repository. They involve environment variable handling on Windows. First I had to enable the support for it, and then change it so on Windows the environment names were all uppercase. This avoided issues with Wasp looking for commands in PATH vs the Path that was set on my machine. On Windows these are case insensitive.
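The normalization can be illustrated with a short sketch: uppercasing environment variable names so a lookup for PATH finds a variable set as Path. This models the change described; it is not the Wasp source.

```python
# On Windows, environment variable names are case insensitive, so
# normalizing the keys to uppercase makes lookups consistent.

def normalized_env(environ):
    return {name.upper(): value for name, value in environ.items()}

env = normalized_env({"Path": "C:\\Windows", "TEMP": "C:\\Temp"})
print(env["PATH"])  # found even though it was set as Path
```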

For building a Windows compatible Wasp VM stub and REPL I used a cross compiler on Linux. I used the gcc-mingw-w64 package in Ubuntu for this. Build Wasp VM with:

$ ./configure --host=i686-w64-mingw32
$ OS=MINGW32 CC=i686-w64-mingw32-gcc make

This puts the Windows stub in the stubs directory. I copied this to the stubs directory of the node running the console so it could generate Windows drones. I had to build libevent for Windows using the same cross compiler and tweak the Wasp build to find it. Removing the -mno-cygwin flag was needed as well. I’ll submit patches so the makefile works for cross compilation without changes if no one else gets to it first.

Tags: waspvm 


Decentralized Websites with ZeroNet

ZeroNet is a new project that aims to deliver a decentralized web. It uses a combination of bittorrent, a custom file server and a web based user interface to do this, and manages to provide a pretty usable experience.

Users run a ZeroNet node and do their web browsing via the local proxy it provides. Website addresses are public keys, generated using the same algorithm as used for bitcoin addresses. A request for a website key results in the node looking in the bittorrent network for peers that are seeding the site. ZeroNet selects peers and connects to them directly using a custom file server that it implements. This is used to download the files required for the site. Bittorrent is only used for selecting peers, not for the site contents.

Once a site is retrieved the node then starts acting as a peer, serving the site’s content to users. The more users browsing your site, the more peers become available to provide the data. If the original site goes down the remaining peers can still serve the content.

Site updates are done by the owner making changes and then signing these changes with the private key for the site address. The update then gets distributed to the peers that are seeding the site.

Browsing is done through a standard web browser. The interface uses Websockets to communicate with the local node and receive real time information about site updates. The interface uses a sandboxed iframe to display websites.


ZeroNet is open source and hosted on github. Everything is done through the one command. To run a node:

$ python

This will start the node and the file server. A check is made to see if the file server is available for connections externally. If this fails it displays a warning but the system still works. You won’t seed sites or get real time notification of site updates however. The fix for this is to open port 15441 in your firewall. ZeroNet can use UPNP to do this automatically but it requires a MiniUPNP binary for this to work. See the --upnpc command line switch for details.

The node can be accessed from a web browser locally using port 43110. Providing a site address as the path will access a particular ZeroNet site. For example, 1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr is the main ‘hello’ site that is first displayed. To access it you’d use the URL

Creating a site

To create a site you first need to shut down your running node (using ctrl+c will do it) then run the siteCreate command:

$ python siteCreate
- Site private key: ...private key...
- Site address: address...
- Site created!

You should record the private key and address as you will need them when updating the site. The command results in a data/address directory being created, where ‘address’ is the site address that siteCreate produced. Inside that are a couple of default files. One of these, content.json, contains JSON data listing the files contained within the site and signing information. This gets updated automatically when you sign your site after doing updates. If you edit the title key in this file you can give your site a title that appears in the user interface instead of the address.
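For illustration, a minimal content.json might look like the following. This is a hypothetical reconstruction based only on the fields described above; real ZeroNet files contain more keys, and the hash and signature values here are placeholders.

```python
import json

# Hypothetical minimal content.json: file list plus signing information.
content = {
    "title": "My ZeroNet site",   # shown in the UI instead of the address
    "files": {
        "index.html": {"sha512": "...placeholder hash...", "size": 1234},
    },
    "signs": {"...site address...": "...signature from siteSign..."},
    "modified": 1420070400,
}

print(json.dumps(content, indent=2))
```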

Another file that gets modified during this site creation process is the sites.json file in the data directory. It contains the list of all the sites and some metadata about them.

If you visit the site’s address in your browser (where siteaddress is the address created with siteCreate) then you’ll see the default website that is created. If your node is peering successfully and you access this address from another node it will download the site, display it, and start seeding it. This is how the site data spreads through the network.

Updating a site

To change a site you must first store your files in the data/siteaddress directory. Any HTML, CSS, JavaScript, etc can be put here. It’s like a standard website root directory. Just don’t delete the content.json file that’s there. Once you’ve added, modified or removed files you run the siteSign command:

$ python siteSign siteaddress
- Signing site: siteaddress...
Private key (input hidden):

Now you enter the private key that was displayed (and hopefully you saved) when you ran siteCreate. The site gets signed and the information stored in content.json. To publish these changes to peers seeding the site:

$ python sitePublish siteaddress
...publishes to peers...

If your node is running it will serve the files from the running instance. If it is not then the sitePublish command will continue running to serve the files.

Deleting a site

You can pause seeding a site from the user interface but you can’t delete it. To do that you must shutdown the node and delete the sites data/siteaddress directory manually. You will also need to remove its entry from data/sites.json. When you restart the node it will no longer appear.
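The manual steps above can be sketched as a small script, assuming the layout described (a data directory containing sites.json and one subdirectory per site address). This is illustrative, not a ZeroNet tool; run it only with the node shut down.

```python
import json
import shutil
from pathlib import Path

def delete_site(data_dir, address):
    """Remove a site's files and its metadata entry from sites.json."""
    data = Path(data_dir)
    site_dir = data / address
    if site_dir.exists():
        shutil.rmtree(site_dir)              # delete data/siteaddress
    sites_file = data / "sites.json"
    sites = json.loads(sites_file.read_text())
    sites.pop(address, None)                 # drop its entry from sites.json
    sites_file.write_text(json.dumps(sites, indent=2))
```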

Site tips

Because the website is displayed in a sandboxed iframe there are some restrictions on what it can do. The most obvious is that only relative URLs work in anchor elements. If you click on an absolute URL it does nothing. The sandboxed iframe has the allow-top-navigation option, which means you can link to external pages or other ZeroNet sites if you set the target attribute of the anchor element to _top. So this will work:

<a href="" target="_top">click me</a>

But this will not:

<a href="">click me</a>

Dynamic websites are supported, but require help from centralized services. The ZeroNet node includes an example of a dynamic website called ‘ZeroBoard’. This site allows users to enter a message in a form and it’s published to a list of messages which all peering nodes will see. It does this by posting the message to an external web application that the author runs on the standard internet. This web app updates a file inside the site’s ZeroNet directory and then signs it. The result is published to all peers and they automatically get the update through the Websocket interface.

Although this works it’s unfortunate that it relies on a centralized web application. The ZeroNet author has posted that they are looking at decentralized ways of doing this, maybe using bitmessage or some other system. Something involving peer to peer WebRTC would be interesting.


ZeroNet seems to be most similar to tor, i2p or freenet. Compared to these it lacks the anonymity and encryption aspects. But it decentralizes the site content which tor and i2p don’t. Freenet provides decentralization too but does not allow JavaScript in sites. ZeroNet does allow JavaScript but this has the usual security and tracking concerns.

Site addresses are in the same format as bitcoin addresses. It should be possible to import the private key into bitcoin and then bitcoins sent to the public address of a site would be accessed by the site owner. I haven’t tested this but I don’t see why it couldn’t be made to work. Maybe this could be leveraged somehow to enable a web payment method.

ZeroNet’s lack of encryption or obfuscation of the site contents could be a problem. A peer holds the entire site in a local directory. If this contains malicious or illegal content it can be accidentally run or viewed. Or it could be picked up in automated scans and the user held responsible. Even if the site originally had harmless content the site author could push an update out that contains problematic material. That’s a bit scary.

It’s early days for the project and hopefully some of these issues can be addressed. As it is though it works well, is very usable, and is an interesting experiment in decentralizing websites. Some links for more information:

Tags: mozilla  zeronet 


Improving Linux Font Support in Self

In the Linux version of the Self programming language implementation the fonts used are standard X11 fonts. On modern Linux systems these don’t look great and a common question asked in the mailing list is how to improve it. Fonts on the Mac OS X build of Self use a different system and they look much better. It would be good to convert the Linux version to use freetype to gain more control over fonts.

I worked on adding Freetype support a couple of years ago and wrote about it on the mailing list. I haven’t done much on it since then but the code is in my github repository under the linux_fonts branch. That work adds Self primitives to access Freetype but does not integrate it into the Self font system. I hope to be able to continue this work sometime but I’m unlikely to get to it in the near future. This post is to point to the code and show how to use the primitives, in case someone else would like to take it forward as a project.

To try the code out you’ll need to build Self in the usual manner but using that branch. Once the desktop is launched I test it by creating an object with the following slots:

(| parent* = traits oddball.
   window <- desktop w anyWindowCanvas.
   draw <- nil.
   font <- nil.
   xftcolor <- nil.
   xrcolor <- nil.
|)

With an outliner for this object I create an evaluator for it and, one at a time, evaluate the following code snippets:

draw: window display xftDrawCreate: window platformWindow
  Visual: window display screen defaultVisualOfScreen
  Colormap: window display screen defaultColormapOfScreen

xftcolor: xlib xftColor new.
xrcolor: xlib xRenderColor new.
xrcolor alpha: 16rffff

window display xftColorAllocValue: window display screen defaultVisualOfScreen
  Colormap: window display screen defaultColormapOfScreen
  RenderColor: xrcolor
  XftColor: xftcolor

font: window display xftFontOpenNameOnScreen: window display screen number
 Name: 'verdana-18'

draw xftDrawString8: xftcolor
  Font: font X: 100 Y: 100
  String: 'Hello World!'

This results in the text ‘Hello World!’ appearing at position 100@100 on the desktop in the Verdana font. The video below demonstrates this to show how it’s done in the Self user interface:

This is a common workflow I use to prototype things in Self. I create an empty object and populate it with slots to hold data. With an evaluator created for that object these slots are accessible without needing an object to call them on. This is known as ‘implicit self calls’: the message for the slot is implicitly sent to the current object. I create and remove slots as needed. I can use the Self outliner to drill down on the slots to look at and manipulate those objects specifically if needed.

Hopefully this Freetype support can be used as a base for better looking fonts on Linux. If you are keen to take it further, or have ideas on how to integrate it, I can be contacted using the details at the bottom of my blog, or you can raise it in the Self mailing list or on github.

Tags: self 


Using Inferno OS on Linux

It’s the end of the year and I’m going through some of the patches I have for projects and putting them online so I have a record of them. I’m working through things I did on Inferno OS.

Source repositories

The official Inferno source is hosted in a mercurial repository on Google Code. There is also a repository containing a snapshot of this with changes to get Inferno building on Android systems on bitbucket. Finally there is a tarball containing a snapshot which includes fonts required by Inferno that are not stored in the mercurial repository.

I carry a couple of custom patches to Inferno and I also have changes to the Android port that I keep in a patch file. When working on these I find it useful to be able to diff between the current Inferno source and the Hellaphone source to see what changes were made.

I’ve cleaned these up and put the patches in a github repository with a conversion of the mercurial repositories to git. The Hellaphone code is a branch of the repository, making it easy to cherry pick and diff between that and the main source. I’ve put this in

There are four main branches in that repository:

  • 20100120 - Snapshot with the additional fonts
  • hellaphone - Hellaphone Android port
  • inferno - Direct import of the Inferno mercurial repository
  • master - Everything needed for building inferno on Linux. It is a recent working version of the inferno branch with the additional fonts from the 20100120 branch and any additional work in progress patches to work around existing issues.

I use master for running Inferno on desktop and hellaphone for phone development. The README on master explains how to build and the README on hellaphone has the steps to get it running on a phone.

Building on Linux

Building Inferno on Linux with the official source involves downloading an archive containing a snapshot of the source with the addition of some font files that are licensed differently. You then do a mercurial update to get the latest source code. Something like the following performs the build:

$ wget
$ tar xvf inferno-20100120.tgz
$ cd inferno
$ hg pull -u
$ ...edit mkconfig so the following entries are set...
$ export PATH=$PATH:`pwd`/Linux/386/bin
$ ./
$ mk nuke
$ mk install

Building using my github repository requires:

$ git clone
$ cd inferno
$ sh Mkdirs
$ ...edit `mkconfig` so the following entries are set to these values...
$ export PATH=$PATH:`pwd`/Linux/386/bin
$ ./
$ mk nuke
$ mk install

Running Inferno

I use a shell script to run Inferno. The contents look like:

export PATH=$PATH:~/inferno/Linux/386/bin
export EMU="-r/home/$USER/inferno -c1 -g1920x1008"
exec emu $* /dis/sh.dis -a -c "wm/wm wm/logon -u $USER"

This starts Inferno using the same username as on the Linux system and with the desktop size set to 1920x1008. The JIT is enabled (the -c1 flag does this). For this to work I need to have a /usr/chris directory in the Inferno file system with some default files. This can be created by copying the existing /usr/inferno directory:

$ cd inferno
$ cp -r usr/inferno usr/chris

The patches in my master branch add a -F command line switch to the emu command to use full screen in X11. This is useful when using a window manager that doesn’t allow switching to and from full screen. On Ubuntu I often run Inferno full screen in a workspace that I can switch back and forth to get between Linux and Inferno. This can be run with the above shell script (assuming it is named inferno and on the path):

$ inferno -F

Accessing the host filesystem and commands

Often I want to access the host filesystem from within Inferno. This can be done using bind with a special path syntax to represent the host system. I like to keep the path on Inferno the same as the path on Linux so when running host commands the paths in error messages match up. This makes it easier to click on error messages and load the file in Inferno text editors. I map /home as follows (‘;’ represents an Inferno shell prompt):

; mkdir /home
; bind '#U*/home' /home

Now all the directories and files under /home in Linux are available under /home in Inferno.

The Inferno os command is used to run host programs from within Inferno. To run git for example:

; os -d /home/username/inferno git status
...output of git...

Note that I had to pass the -d switch to give the path on the host filesystem that will be the current working directory for the command. I can even do Firefox builds from within Inferno:

; cd /home/username/firefox
; os -d /home/username/firefox ./mach build
...firefox building...

Any errors in source files that appear can be shown in the Inferno acme editor as I’ve mapped /home to be the same on Inferno as on the host.

VNC in Inferno

Sometimes I need to run graphical host commands from within Inferno. Mechiel Lukkien has a large number of useful Inferno programs written in Limbo. One of these is a vnc client. From a source clone of this project it can be installed in Inferno with something like:

; cd vnc
; SYSROOT=Inferno
; mk install

Start a VNC server from the Linux side:

$ vncpasswd
...set password...
$ vncserver :1

Connect to it from the Inferno side:

; vncv tcp!!5901

Now you can run Firefox and other useful programs from within Inferno. Inferno has its own web browser, Charon, but it’s nice to be able to use a full browser, terminals, etc to do Linux things from within Inferno when needed. vncv isn’t feature complete - it doesn’t do modifier keys unfortunately, but is still a useful tool.

Mechiel Lukkien has many other useful libraries worth exploring. There is an ssh implementation, a mercurial client, and an interesting ‘irc filesystem’ which I’ve written about before.

Other tips

Acme is one of the editors available in Inferno. It’s also available on other operating systems and Russ Cox has a Tour of Acme video which explains its features. Most of these work in the Inferno port too.

Pete Elmore has an introduction to the Inferno shell and other Inferno posts. The shell has a way of passing data from one shell command to another. For example, du can recursively list all files in the current directory and subdirectories:

; du -an

To pass this list to grep to find the text ‘ASSERT’ within these files:

; grep -n ASSERT `{du -an}
filename.h:100: ASSERT(1)


Limbo is the programming language used in Inferno. It is a typed programming language with support for concurrency using channels and lightweight threads. The book Inferno Programming with Limbo is available as a PDF and also makes for a good introduction to Inferno itself. The line between Inferno as an OS and Inferno as a language runtime is a bit blurry at times.

Why Inferno?

Inferno has some interesting ideas with regards to distributing computing power and provides a way to explore some ideas that were in the Plan 9 OS but usable on multiple platforms. My post on sharing computer and phone resources gives some examples of ways it could be used. Its lightweight threads and concurrency make it a useful programming system for server based tasks.

Inferno is a small hackable OS for learning about operating systems. For more on this the book Principles of Operating Systems by Brian Stuart covers Inferno and Linux. One aspect I was interested in exploring was porting parts of the OS from C to the ATS programming language in a similar manner to what I describe in my preventing heartbleed bugs with safer programming languages post.

More on Inferno is available at:

Tags: inferno 

This site is accessible over tor as hidden service mh7mkfvezts5j6yu.onion, ZeroNet using key 13TSUryi4GhHVQYKoRvRNgok9q8KsMLncq, or Freenet using key: