<p><em>Bluish Coder, <a href="http://bluishcoder.co.nz/">bluishcoder.co.nz</a></em></p>
<h1>Contributing to Servo</h1>
<p><em>2015-03-24</em></p>
<p><a href="https://github.com/servo/servo">Servo</a> is a web browser engine written in the <a href="http://www.rust-lang.org/">Rust programming language</a>. It is being developed by <a href="https://www.mozilla.org">Mozilla</a>. Servo is open source and the project is developed on github.</p>
<p>I was looking for a small project to do some Rust programming and Servo being written in Rust seemed likely to have tasks that were small enough to do in my spare time yet be useful contributions to the project. This post outlines how I built Servo, found issues to work on, and got them merged.</p>
<h2>Preparing Servo</h2>
<p>The <a href="https://github.com/servo/servo/blob/master/README.md">Servo README</a> has details on the pre-requisites needed. Installing the pre-requisites and cloning the repository on Ubuntu was:</p>
<pre><code>$ sudo apt-get install curl freeglut3-dev \
libfreetype6-dev libgl1-mesa-dri libglib2.0-dev xorg-dev \
msttcorefonts gperf g++ cmake python-virtualenv \
libssl-dev libbz2-dev libosmesa6-dev
...
$ git clone https://github.com/servo/servo
</code></pre>
<h2>Building Rust</h2>
<p>The Rust programming language has been fairly volatile in terms of language and library changes. Servo deals with this by requiring a specific git commit of the Rust compiler to build. The Servo source is periodically updated for new Rust versions. The commit id for Rust that is required to build is stored in the <a href="https://github.com/servo/servo/blob/master/rust-snapshot-hash">rust-snapshot-hash file</a> in the Servo repository.</p>
<p>If the Rust compiler isn't already installed there are two options for building Servo. The first is to build the required version of Rust yourself, as outlined below. The second is to let the Servo build system, <code>mach</code>, download a binary snapshot and use that. If you wish to do the latter (it may make things easier when starting out), skip the following step of building Rust.</p>
<pre><code>$ cat servo/rust-snapshot-hash
d3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf/rustc-1.0.0-dev
$ git clone https://github.com/rust-lang/rust
$ cd rust
$ git checkout -b servo d3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf
$ ./configure --prefix=/home/myuser/rust
$ make
$ make install
</code></pre>
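<p>The checkout step can be scripted, since the commit id is everything before the <code>/</code> in <code>rust-snapshot-hash</code>. A minimal sketch, using a copy of the file contents shown above rather than a real Servo checkout:</p>

```shell
# Extract the pinned Rust commit id from a rust-snapshot-hash style file.
# Sample contents copied from the Servo checkout shown above.
printf 'd3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf/rustc-1.0.0-dev\n' > /tmp/rust-snapshot-hash
commit=$(cut -d/ -f1 /tmp/rust-snapshot-hash)
echo "$commit"
# The id can then be fed to: git checkout -b servo "$commit"
```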
<p>Note that I configure Rust to be installed in a directory off my home directory. I do this so I can manage different Rust versions side by side. The build will take a long time, and once it is done you need to add the install directories to the <code>PATH</code>:</p>
<pre><code>$ export PATH=$PATH:/home/myuser/rust/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/myuser/rust/lib
</code></pre>
<h2>Building Servo</h2>
<p>There is a configuration file used by the Servo build system to store information on which Rust compiler to use, whether to use a system-wide Cargo (Rust package manager) install, and various paths. This file, <code>.servobuild</code>, should exist in the root of the Servo source that was cloned. There is a <a href="https://github.com/servo/servo/blob/master/servobuild.example">sample file</a> that can be used as a template. The values I used were:</p>
<pre><code>[tools]
system-rust = true
system-cargo = false
[build]
android = false
debug-mozjs = false
</code></pre>
<p>If you want to use a downloaded binary snapshot of Rust to build Servo you should set the <code>system-rust</code> setting to <code>false</code>. With it set to <code>true</code> as above it will expect to find a Rust of the correct version in the path.</p>
<p>Servo uses the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/mach">mach command line interface</a> that is used to build Firefox. Once the <code>.servobuild</code> is created then Servo can be built with:</p>
<pre><code>$ ./mach build
</code></pre>
<p>Servo can be run with:</p>
<pre><code>$ ./mach run http://bluishcoder.co.nz
</code></pre>
<p>To run the test suite:</p>
<pre><code>$ ./mach test
</code></pre>
<h2>Finding something to work on</h2>
<p>The <a href="https://github.com/servo/servo/issues">github issue list</a> has three useful <a href="https://github.com/servo/servo/labels">labels</a> for finding work. They are:</p>
<ul>
<li><a href="https://github.com/servo/servo/labels/E-easy">E-easy</a></li>
<li><a href="https://github.com/servo/servo/labels/E-less%20easy">E-less easy</a></li>
<li><a href="https://github.com/servo/servo/labels/E-hard">E-hard</a></li>
</ul>
<p>For my first task I searched for <code>E-easy</code> issues that were not currently assigned (using the <code>C-assigned</code> label). I commented in the issue asking if I could work on it and it was then assigned to me by a Servo maintainer.</p>
<h2>Submitting the Fix</h2>
<p>Fixing the issue involved:</p>
<ul>
<li>Forking the <a href="https://github.com/servo/servo">Servo repository on github</a>.</li>
<li>Cloning my fork locally and making the required changes to the source in a branch created for the issue.</li>
<li>Committing the changes locally and pushing them to my fork on github.</li>
<li>Raising a pull request for the branch.</li>
</ul>
<p>Raising the pull request runs a couple of automated actions on the Servo repository. The first is an <a href="https://github.com/servo/servo/pull/5219#issuecomment-80935607">automated response thanking you for the changes</a> followed by <a href="https://github.com/servo/servo/pull/5219#issuecomment-80935612">a link to the external critic review system</a>.</p>
<h2>Reviews</h2>
<p>The Servo project uses the <a href="https://critic.hoppipolla.co.uk/">Critic review tool</a>. This will <a href="https://critic.hoppipolla.co.uk/r/4268">contain data from your pull request</a> and any reviews made by Servo reviewers.</p>
<p>To address reviews I made the required changes and committed them to my local branch as separate commits using the <code>fixup</code> flag to <code>git commit</code>. This associates the new commit with the original commit that contained the change and allows easier squashing later.</p>
<pre><code>$ git commit --fixup=&lt;commit id of original commit&gt;
</code></pre>
<p>The changes are then pushed to the github fork and the previously made pull request is automatically updated. The Critic review tool also automatically picks up the change and will associate the fix with the relevant lines in the review.</p>
<p>With some back and forth the changes get approved and a request might be made to squash the commits. If <code>fixup</code> was used to record the review changes then they will be squashed into the correct commits when you rebase:</p>
<pre><code>$ git fetch origin
$ git rebase --autosquash origin/master
</code></pre>
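<p>The fixup and autosquash flow can be exercised in a throwaway repository. This sketch uses illustrative file names and commit messages, and rebases over <code>HEAD~2</code> rather than <code>origin/master</code>:</p>

```shell
# Demonstrate git commit --fixup and rebase --autosquash in a temp repo.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo base > file.txt && git add file.txt && git commit -qm "initial"
echo feature >> file.txt && git commit -qam "Implement feature"
target=$(git rev-parse HEAD)
echo review-fix >> file.txt
git commit -qa --fixup="$target"     # records "fixup! Implement feature"
# A non-interactive autosquash rebase folds the fixup into its target:
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --quiet HEAD~2
git rev-list --count HEAD            # prints 2: initial + squashed feature
```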
<p>Force pushing this to the fork will result in the pull request being updated. When the reviewer marks this as <code>r+</code> the merge to master will start automatically, along with a build and test runs. If test failures happen these get added to the pull request and the review process starts again. If tests pass and it merges then it will be closed and the task is done.</p>
<p>A full overview of the process is available on the github wiki under <a href="https://github.com/servo/servo/wiki/Github-%26-Critic-PR-handling-101">Github and Critic PR handling 101</a>.</p>
<h2>Conclusion</h2>
<p>The process overhead of committing to Servo is quite low. There are plenty of small tasks that don't require a deep knowledge of Rust. The first task I worked on was <a href="https://github.com/servo/servo/pull/5202">basically a search/replace</a>. The second was more involved, implementing the <a href="https://github.com/servo/servo/pull/5219">view-source protocol and text/plain handling</a>. The latter allows the following to work in Servo:</p>
<pre><code>$ ./mach run view-source:http://bluishcoder.co.nz
$ ./mach run http://cd.pn/plainttext.txt
</code></pre>
<p>The main issues I encountered working with Rust and Servo were:</p>
<ul>
<li>Compiling Servo is quite slow. Even changing private functions in a module would result in other modules rebuilding. I assume this is due to cross module inlining.</li>
<li>I'd hoped to get away from intermittent test failures like there are in Gecko but there seems to be the occasional <a href="https://github.com/servo/servo/pull/5219#issuecomment-82340911">intermittent reftest failure</a>.</li>
</ul>
<p>The things I liked:</p>
<ul>
<li>Very helpful Servo maintainers on IRC and in github/review comments.</li>
<li>Typechecking in Rust helped find errors early.</li>
<li>I found it easier comparing Servo code to HTML specifications and following them together than I do in Gecko.</li>
</ul>
<p>I hope to contribute more as time permits.</p>
<h1>Firefox Media Source Extensions Update</h1>
<p><em>2015-03-03</em></p>
<p>This is an update on some recent work on the <a href="http://blog.mjg.im/2014/05/08/testing-media-source-extensions/">Media Source Extensions</a> API in Firefox. There has been a lot of work done on MSE and the underlying media framework by Gecko developers, and this update just covers some of the telemetry and exposed debug data that I've been involved with implementing.</p>
<h2>Telemetry</h2>
<p>Mozilla has a <a href="https://wiki.mozilla.org/Telemetry">telemetry system</a> to get data on how Firefox behaves in the real world. We've added some MSE video stats to telemetry to help identify usage patterns and possible issues.</p>
<p><a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1119947">Bug 1119947</a> added information on what state an MSE video is in when the video is unloaded. The intent of this is to find out if users are exiting videos due to slow buffering or seeking. The data is available on <a href="http://telemetry.mozilla.org/">telemetry.mozilla.org</a> under the <code>VIDEO_MSE_UNLOAD_STATE</code> category. This has five states:</p>
<p>0 = ended, 1 = paused, 2 = stalled, 3 = seeking, 4 = other</p>
<p>The data provides a count of the number of times a video was unloaded for each state. If a large number of users were exiting during the <code>stalled</code> state then we might have an issue with videos stalling too often. Looking at current stats on <code>beta 37</code> we see about 3% unloading on stall with 14% on ended and 57% on other. The 'other' represents unloading during normal playback.</p>
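<p>As a quick sanity check, the quoted figures leave roughly a quarter of unloads for the remaining states (<code>paused</code> and <code>seeking</code>); the numbers are rounded, so this is only approximate:</p>

```shell
# Remaining share of unloads after stalled (3%), ended (14%) and other (57%).
awk 'BEGIN { printf "%d\n", 100 - (3 + 14 + 57) }'
# prints 26
```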
<p><a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1127646">Bug 1127646</a> will add additional data to get:</p>
<ul>
<li>Join Latency - time between video load and video playback for autoplay videos</li>
<li>Mean Time Between Rebuffering - play time between rebuffering hiccups</li>
</ul>
<p>This will be useful for determining performance of MSE for sites like YouTube. The bug is going through the review/comment stage and when landed the data will be viewable at <a href="http://telemetry.mozilla.org">telemetry.mozilla.org</a>.</p>
<h2>about:media plugin</h2>
<p>While developing the Media Source Extensions support in Firefox we found it useful to have a page displaying internal debug data about active MSE videos.</p>
<p>In particular it was good to be able to get a view of what buffered data the <a href="http://www.w3.org/TR/media-source/">MSE JavaScript API</a> had and what our <a href="https://github.com/mozilla/gecko-dev/tree/master/dom/media/mediasource">internal Media Source C++ code</a> stored. This helped track down issues involving switching buffers, memory size of resources and other similar things.</p>
<p>The internal data is displayed in an <code>about:media</code> page. Originally the page was <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1112424">hard coded in the browser</a> but <a href="https://bugzilla.mozilla.org/user_profile?login=gavin.sharp%40gmail.com">:gavin</a> suggested moving it to an addon. The addon is now located at <a href="https://github.com/doublec/aboutmedia">https://github.com/doublec/aboutmedia</a>. That repository includes the <a href="https://github.com/doublec/aboutmedia/blob/master/aboutmedia.xpi?raw=true">aboutmedia.xpi</a> which can be installed directly in Firefox. Once installed you can go to <code>about:media</code> to view data on any MSE videos.</p>
<p>To test this, visit <a href="https://www.youtube.com/watch?v=3V7wWemZ_cs">a video that has MSE support</a> in a nightly build with the <code>about:config</code> preferences <code>media.mediasource.enabled</code> and <code>media.mediasource.mp4.enabled</code> set to <code>true</code>. Let the video play for a short time then visit <code>about:media</code> in another tab. You should see something like:</p>
<pre><code>https://www.youtube.com/watch?v=3V7wWemZ_cs
mediasource:https://www.youtube.com/6b23ac42-19ff-4165-8c04-422970b3d0fb
currentTime: 101.40625
SourceBuffer 0
start=0 end=14.93043
SourceBuffer 1
start=0 end=15
Internal Data:
Dumping data for reader 7f9d85ef1800:
Dumping Audio Track Decoders: - mLastAudioTime: 7.732243
Reader 1: 7f9d75cba800 ranges=[(10.007800, 14.930430)] active=false size=79880
Reader 0: 7f9d85e88000 ranges=[(0.000000, 10.007800)] active=false size=160246
Dumping Video Track Decoders - mLastVideoTime: 7.000000
Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] active=false size=184613
Reader 0: 7f9d85985000 ranges=[(0.000000, 10.000000)] active=false size=1281914
</code></pre>
<p>The first portion of the displayed data shows the JS API's view of the buffered data:</p>
<pre><code>currentTime: 101.40625
SourceBuffer 0
start=0 end=14.93043
SourceBuffer 1
start=0 end=15
</code></pre>
<p>This shows two <a href="http://www.w3.org/TR/media-source/#sourcebuffer">SourceBuffer</a> objects. One containing data from 0-14.9 seconds and the other 0-15 seconds. One of these will be video data and the other audio. The currentTime attribute of the video is 101.4 seconds. Since there is no buffered data for this range the video is likely buffering. I captured this data just after seeking while it was waiting for data from the seeked point.</p>
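<p>The buffered length of each SourceBuffer can be pulled out of a dump like this with a little text processing. A sketch using the sample data above:</p>

```shell
# Compute each SourceBuffer's buffered length from an about:media style dump.
# Sample data copied from the capture above.
cat > /tmp/dump.txt <<'EOF'
SourceBuffer 0
start=0 end=14.93043
SourceBuffer 1
start=0 end=15
EOF
awk -F'[= ]' '/^start=/ { printf "%.5f\n", $4 - $2 }' /tmp/dump.txt
# prints 14.93043 then 15.00000
```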
<p>The second portion of the displayed data shows information on the C++ objects implementing media source:</p>
<pre><code>Dumping data for reader 7f9d85ef1800:
Dumping Audio Track Decoders: - mLastAudioTime: 7.732243
Reader 1: 7f9d75cba800 ranges=[(10.007800, 14.930430)] active=false size=79880
Reader 0: 7f9d85e88000 ranges=[(0.000000, 10.007800)] active=false size=160246
Dumping Video Track Decoders - mLastVideoTime: 7.000000
Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] active=false size=184613
Reader 0: 7f9d85985000 ranges=[(0.000000, 10.000000)] active=false size=1281914
</code></pre>
<p>A <code>reader</code> is an instance of the <a href="https://github.com/mozilla/gecko-dev/blob/ba77f2e511f00b311c76d4875968182ba307a7ea/dom/media/mediasource/MediaSourceReader.h">MediaSourceReader</a> C++ class. That reader holds two <a href="https://github.com/mozilla/gecko-dev/blob/ba77f2e511f00b311c76d4875968182ba307a7ea/dom/media/mediasource/SourceBufferDecoder.h">SourceBufferDecoder</a> C++ instances. One for audio and the other for video. Looking at the video decoder it has two readers associated with it. These readers are instances of a derived class of <a href="https://github.com/mozilla/gecko-dev/blob/ba77f2e511f00b311c76d4875968182ba307a7ea/dom/media/MediaDecoderReader.h">MediaDecoderReader</a> which are tasked with the job of reading frames from a particular video format (WebM, MP4, etc).</p>
<p>The two readers each have buffered data ranging from 0-10 seconds and 10-15 seconds. Neither are 'active'. This means they are not currently the video stream used for playback. This will be because we just started a seek. You can view how buffer switching works by watching which of these become <code>active</code> as the video plays. The <code>size</code> is the amount of data in bytes that the reader is holding in memory. <code>mLastVideoTime</code> is the presentation time of the last processed video frame.</p>
<p>MSE videos will have data evicted as they are played. The size threshold for eviction defaults to 75MB and can be changed with the <code>media.mediasource.eviction_threshold</code> preference in <code>about:config</code>. When data is appended via the <code>appendBuffer</code> method on a <code>SourceBuffer</code> an eviction routine is run. If more data than the threshold is held then we start removing portions of data held in the readers. This can be observed in <code>about:media</code> as the start and end ranges being trimmed or readers being removed entirely.</p>
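<p>As a back-of-the-envelope check against the capture above, the readers there hold nowhere near the default threshold, so no eviction would occur (treating 75MB as 75 * 1024 * 1024 bytes is my assumption here):</p>

```shell
# Compare the total reader sizes from the about:media dump above
# with the default 75MB eviction threshold.
threshold=$((75 * 1024 * 1024))
held=$((160246 + 79880 + 1281914 + 184613))
echo "$held"        # prints 1706653, about 1.7MB
if [ "$held" -gt "$threshold" ]; then echo evict; else echo keep; fi
# prints keep
```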
<p>This internal data is most useful for Firefox media developers. If you encounter stalls playing videos or unusual buffer switching behaviour then copy/pasting the data from <code>about:media</code> in a <a href="https://bugzilla.mozilla.org/">bug report</a> can help with tracking the problem down. If you are developing an MSE player then the information may also be useful to find out why the Firefox implementation may not be behaving how you expect.</p>
<p>The source of the addon is <a href="https://github.com/doublec/aboutmedia">on github</a> and relies on a chrome only debug method, <code>mozDebugReaderData</code> on <a href="https://github.com/mozilla/gecko-dev/blob/ba77f2e511f00b311c76d4875968182ba307a7ea/dom/webidl/MediaSource.webidl">MediaSource</a>. Patches to improve the data and functionality are welcome.</p>
<h2>Status</h2>
<p>Media Source Extensions is still in progress in Firefox and can be tested on Nightly, Aurora and Beta builds. The current plan is to enable support limited to YouTube only in Firefox 37 on Windows and Mac OS X for MP4 videos. Other platforms, video formats and wider site usage will be enabled in future versions as the implementation improves.</p>
<p>To track work on the API you can follow the <a href="https://bugzilla.mozilla.org/showdependencytree.cgi?id=778617&hide_resolved=1">MSE bug in Bugzilla</a>.</p>
<h1>Decentralized Websites with ZeroNet</h1>
<p><em>2015-01-15</em></p>
<p><a href="https://github.com/HelloZeroNet/ZeroNet" target="_top">ZeroNet</a> is a new project that aims to deliver a decentralized web. It uses a combination of bittorrent, a custom file server and a web based user interface to do this, and manages to provide a pretty usable experience.</p>
<p>Users run a ZeroNet node and do their web browsing via the local proxy it provides. Website addresses are public keys, generated using the same algorithm as used for bitcoin addresses. A request for a website key results in the node looking in the bittorrent network for peers that are seeding the site. Peers are selected and ZeroNet connects directly to a custom file server that each peer implements, which is used to download the files required for the site. Bittorrent is only used for selecting peers, not for the site contents.</p>
<p>Once a site is retrieved the node then starts acting as a peer, serving the site's content to users. The more users browsing your site, the more peers become available to provide the data. If the original site goes down the remaining peers can still serve the content.</p>
<p>Site updates are done by the owner making changes and then signing these changes with the private key for the site address. It then starts getting distributed to the peers that are seeding it.</p>
<p>Browsing is done through a standard web browser. The interface uses Websockets to communicate with the local node and receive real time information about site updates. The interface uses a sandboxed iframe to display websites.</p>
<h2>Running</h2>
<p>ZeroNet is open source and hosted on github. Everything is done through the one <code>zeronet.py</code> command. To run a node:</p>
<pre><code>$ python zeronet.py
...output...
</code></pre>
<p>This will start the node and the file server. A check is made to see if the file server is available for connections externally. If this fails it displays a warning but the system still works. You won't seed sites or get real time notification of site updates however. The fix for this is to open port <code>15441</code> in your firewall. ZeroNet can use UPNP to do this automatically but it requires a <a href="http://miniupnp.free.fr/" target="_top">MiniUPNP</a> binary for this to work. See the <code>--upnpc</code> command line switch for details.</p>
<p>The node can be accessed from a web browser locally using port <code>43110</code>. Providing a site address as the path will access a particular ZeroNet site. For example, <code>1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr</code> is the main 'hello' site that is first displayed. To access it you'd use the URL <code>http://127.0.0.1:43110/1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr</code>.</p>
<h2>Creating a site</h2>
<p>To create a site you first need to shut down your running node (using <code>ctrl+c</code> will do it) then run the <code>siteCreate</code> command:</p>
<pre><code>$ python zeronet.py siteCreate
...
- Site private key: ...private key...
- Site address: ...site address...
...
- Site created!
</code></pre>
<p>You should record the private key and address as you will need them when updating the site. The command results in a <code>data/address</code> directory being created, where 'address' is the site address that <code>siteCreate</code> produced. Inside it are a couple of default files. One of these, <code>content.json</code>, contains JSON data listing the files contained within the site and signing information. This gets updated automatically when you sign your site after doing updates. If you edit the <code>title</code> key in this file you can give your site a title that appears in the user interface instead of the address.</p>
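<p>A trimmed, hypothetical <code>content.json</code> might look like the following. The real file carries more fields (per-file hashes and sizes plus signing data), so treat this only as a sketch of where the <code>title</code> key lives:</p>

```shell
# Write a hypothetical, trimmed content.json and pull out the title key.
# Field names beyond "title" are illustrative, not the full real schema.
cat > /tmp/content.json <<'EOF'
{
  "title": "My ZeroNet Site",
  "files": {
    "index.html": { "sha512": "...", "size": 1234 }
  }
}
EOF
grep -o '"title": "[^"]*"' /tmp/content.json
# prints "title": "My ZeroNet Site"
```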
<p>Another file that gets modified during the site creation process is the <code>sites.json</code> file in the <code>data</code> directory. It contains the list of all the sites and some metadata about them.</p>
<p>If you visit <code>http://127.0.0.1:43110/siteaddress</code> in your browser, where <code>siteaddress</code> is the address created with <code>siteCreate</code>, then you'll see the default website that is created. If your node is peering successfully and you access this address from another node it will download the site, display it, and start seeding it. This is how the site data spreads through the network.</p>
<h2>Updating a site</h2>
<p>To change a site you must first store your files in the <code>data/siteaddress</code> directory. Any HTML, CSS, JavaScript, etc. can be put here. It's like a standard website root directory. Just don't delete the <code>content.json</code> file that's there. Once you've added, modified or removed files you run the <code>siteSign</code> command:</p>
<pre><code>$ python zeronet.py siteSign siteaddress
- Signing site: siteaddress...
Private key (input hidden):
</code></pre>
<p>Now you enter the private key that was displayed (and hopefully you saved) when you ran <code>siteCreate</code>. The site gets signed and the information is stored in <code>content.json</code>. To publish these changes to peers seeding the site:</p>
<pre><code>$ python zeronet.py sitePublish siteaddress
...publishes to peers...
</code></pre>
<p>If your node is running it will serve the files from the running instance. If it is not then the <code>sitePublish</code> command will continue running to serve the files.</p>
<h2>Deleting a site</h2>
<p>You can pause seeding a site from the user interface but you can't delete it. To do that you must shut down the node and delete the site's <code>data/siteaddress</code> directory manually. You will also need to remove its entry from <code>data/sites.json</code>. When you restart the node it will no longer appear.</p>
<h2>Site tips</h2>
<p>Because the website is displayed in a sandboxed iframe there are some restrictions in what it can do. The most obvious is that only relative URLs work in anchor elements. If you click on an absolute URL it does nothing. The sandboxed iframe has the <code>allow-top-navigation</code> option which means you can link to external pages or other ZeroNet sites if you use the <code>target</code> attribute of the anchor element and set it to <code>_top</code>. So this will work:</p>
<pre><code><a href="http://bluishcoder.co.nz/" target="_top">click me</a>
</code></pre>
<p>But this will not:</p>
<pre><code><a href="http://bluishcoder.co.nz/">click me</a>
</code></pre>
<p>Dynamic websites are supported, but require help from centralized services. The ZeroNet node includes an example of a dynamic website called 'ZeroBoard'. This site allows users to enter a message in a form and it's published to a list of messages which all peering nodes will see. It does this by posting the message to an external web application that the author runs on the standard internet. This web app updates a file inside the site's ZeroNet directory and then signs it. The result is published to all peers and they automatically get the update through the Websocket interface.</p>
<p>Although this works it's unfortunate that it relies on a centralized web application. The ZeroNet author has posted that they are looking at decentralized ways of doing this, maybe using <a href="https://bitmessage.org" target="_top">bitmessage</a> or some other system. Something involving <a href="http://peerjs.com/" target="_top">peer to peer WebRTC</a> would be interesting.</p>
<h2>Conclusion</h2>
<p>ZeroNet seems to be most similar to <a href="https://www.torproject.org" target="_top">tor</a>, <a href="https://geti2p.net/en/" target="_top">i2p</a> or <a href="http://bluishcoder.co.nz/2014/12/18/using-freenet.html">freenet</a>. Compared to these it lacks the anonymity and encryption aspects. But it decentralizes the site content which tor and i2p don't. Freenet provides decentralization too but does not allow JavaScript in sites. ZeroNet does allow JavaScript but this has the usual security and tracking concerns.</p>
<p>Site addresses are in the same format as bitcoin addresses. It should be possible to import the private key into bitcoin, and then bitcoins sent to the public address of a site could be accessed by the site owner. I haven't tested this but I don't see why it couldn't be made to work. Maybe this could be leveraged somehow to enable a web payment method.</p>
<p>ZeroNet's lack of encryption or obfuscation of the site contents could be a problem. A peer holds the entire site in a local directory. If this contains malicious or illegal content it can be accidentally run or viewed. Or it could be picked up in automated scans and the user held responsible. Even if the site originally had harmless content the site author could push an update out that contains problematic material. That's a bit scary.</p>
<p>It's early days for the project and hopefully some of these issues can be addressed. As it is though it works well, is very usable, and is an interesting experiment in decentralizing websites. Some links for more information:</p>
<ul>
<li><a href="https://github.com/HelloZeroNet/ZeroNet" target="_top">ZeroNet on github</a></li>
<li><a href="http://www.reddit.com/r/Bitcoin/comments/2s72uy/zeronet_decentralized_websites_using_bitcoin/" target="_top">Original reddit announcement</a></li>
<li><a href="http://www.reddit.com/r/zeronet" target="_top">ZeroNet subreddit</a></li>
<li><a href="http://127.0.0.1:43110/18qigy8XcrxLpK7QaS52FfjwN2gjqHE231" target="_top">ZeroNet site containing list of other sites</a></li>
<li><a href="http://127.0.0.1:43110/13TSUryi4GhHVQYKoRvRNgok9q8KsMLncq" target="_top">This site, Bluishcoder, hosted on ZeroNet</a></li>
<li><a href="http://127.0.0.1:43110/1Jr5bnqSnnp94CfC7xrqPh4yYYDRkpzozD" target="_top">My Pitcairn Island photos, hosted on ZeroNet</a></li>
</ul>
<h1>Update on Tor on Firefox Proof of Concept</h1>
<p><em>2014-06-13</em></p>
<p>Yesterday I <a href="http://bluishcoder.co.nz/2014/06/12/using-tor-with-firefox-os.html">wrote about Tor on Firefox OS</a>. Further testing showed an issue when switching networks - a common thing to happen when carrying a mobile device. The <code>iptables</code> rule I was using didn't exclude the <code>tor</code> process itself from having traffic redirected. When a network switch occurred <code>tor</code> would attempt to reestablish connections and this would fail.</p>
<p>A fix for this is to exclude <code>tor</code> from the <code>iptables</code> rules or to use rules for specific processes only. The processes that belong to a Firefox OS application can be viewed with <code>b2g-ps</code>:</p>
<pre><code>APPLICATION SEC USER PID PPID VSIZE RSS NAME
b2g 0 root 181 1 494584 135544 /system/b2g/b2g
(Nuwa) 0 root 830 181 55052 20420 /system/b2g/plugin-container
Built-in Keyboa 2 u0_a912 912 830 67660 26048 /system/b2g/plugin-container
Vertical 2 u0_a1088 1088 830 103336 34428 /system/b2g/plugin-container
Usage 2 u0_a4478 4478 830 65544 23584 /system/b2g/plugin-container
Browser 2 u0_a26328 26328 830 75680 21164 /system/b2g/plugin-container
Settings 2 u0_a27897 27897 830 79840 28044 /system/b2g/plugin-container
(Preallocated a 2 u0_a28176 28176 830 62316 18556 /system/b2g/plugin-container
</code></pre>
<p>Unfortunately the <code>iptables</code> that ships with Firefox OS doesn't seem to support the <code>--pid-owner</code> option for rule selection so I can't select specifically the <code>tor</code> or application processes. I can however select based on <code>user</code> or <code>group</code>. Each application gets their own <code>user</code> so the option to redirect traffic for applications can use that. I wasn't able to get this working reliably though so I switched to targeting the <code>tor</code> process itself.</p>
<p>In my writeup I ran <code>tor</code> as root. I need to run as a different user so that I can use <code>--uid-owner</code> on <code>iptables</code>. Firefox OS inherits the Android method of users and groups where specific users are hardcoded into the system. Since this is a proof of concept and I want to get things working quickly I decided to pick an existing user, <code>system</code>, and run <code>tor</code> as that. By setting the <code>User</code> option in the Tor configuration file I can have Tor switch to that user at run time. Nothing is ever that easy though, as that user does not have permission to do many of the things that <code>tor</code> requires. It can't create sockets, for example.</p>
<p>Enter <a href="https://www.kernel.org/pub/linux/libs/security/linux-privs/kernel-2.2/capfaq-0.2.txt">Linux capabilities</a>. It is possible to grant a process certain capabilities which give it the right to perform privileged actions without being a superuser. There is an existing <a href="https://trac.torproject.org/projects/tor/ticket/8195">Tor trac ticket</a> about this and I used the <a href="https://trac.torproject.org/projects/tor/attachment/ticket/8195/captest.c">sample code in that ticket</a> to modify <code>tor</code> to keep the required capabilities when it switches user. The code I cobbled together to patch <code>tor</code> is in <a href="http://bluishcoder.co.nz/b2g/tor.patch">tor.patch</a>.</p>
<p>To use this, change the <code>Building tor</code> section of my <a href="http://bluishcoder.co.nz/2014/06/12/using-tor-with-firefox-os.html">original post</a> to use these commands:</p>
<pre><code>$ cd $HOME/build
$ wget https://www.torproject.org/dist/tor-0.2.4.22.tar.gz
$ cd tor-0.2.4.22
$ curl http://bluishcoder.co.nz/b2g/tor.patch | patch -p1
$ ./configure --host=arm-linux-androideabi \
--prefix=$HOME/build/install \
--enable-static-libevent
$ make
$ make install
</code></pre>
<p>Change the Tor configuration file to switch the user to <code>system</code> in the <code>Packaging Tor for the device</code> section:</p>
<pre><code>DataDirectory /data/local/tor/tmp
SOCKSPort 127.0.0.1:9050 IsolateDestAddr
SOCKSPort 127.0.0.1:9063
RunAsDaemon 1
Log notice file /data/local/tor/tmp/tor.log
VirtualAddrNetwork 10.192.0.0/10
AutomapHostsOnResolve 1
TransPort 9040
DNSPort 9053
User system
</code></pre>
<p>I've also changed the location of the data files to be in a <code>tmp</code> directory, which needs to be owned by the <code>system</code> user. Change the steps in <code>Running tor</code> to:</p>
<pre><code>$ adb shell
# cd /data/local/tor
# mkdir tmp
# chown system:system tmp
# ./tor -f torrc &
# iptables -t nat -A OUTPUT ! -o lo \
-m owner ! --uid-owner system \
-p udp --dport 53 -j REDIRECT --to-ports 9053
# iptables -t nat -A OUTPUT ! -o lo \
-m owner ! --uid-owner system \
-p tcp -j REDIRECT --to-ports 9040
</code></pre>
<p>Now tor should work in the presence of network switching. I've updated the <a href="http://bluishcoder.co.nz/b2g/b2g_tor.tar.gz">b2g_tor.tar.gz</a> to include the new <code>tor</code> binary, the updated configuration file, and a couple of shell scripts that will run the <code>iptables</code> commands to redirect traffic to <code>tor</code> and to cancel the redirection.</p>
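<p>For reference, here is a sketch of what those two helper scripts might look like. The exact contents of the scripts in the tarball may differ; this is assembled from the <code>iptables</code> commands above, with a <code>DRY_RUN</code> flag added so the commands can be previewed without root:</p>

```shell
#!/system/bin/sh
# Sketch of the redirect on/off helpers. Set DRY_RUN=1 to print the
# iptables commands instead of running them (running for real requires
# root and a kernel with the nat table available).

run_ipt() {
  if [ -n "$DRY_RUN" ]; then
    echo iptables "$@"
  else
    iptables "$@"
  fi
}

# Redirect DNS and TCP traffic to tor's DNSPort/TransPort, excluding
# traffic generated by the "system" user that tor itself runs as.
tor_redirect_on() {
  run_ipt -t nat -A OUTPUT ! -o lo -m owner ! --uid-owner system \
          -p udp --dport 53 -j REDIRECT --to-ports 9053
  run_ipt -t nat -A OUTPUT ! -o lo -m owner ! --uid-owner system \
          -p tcp -j REDIRECT --to-ports 9040
}

# Cancel the redirection by flushing the nat table.
tor_redirect_off() {
  run_ipt -t nat -F
}
```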
<p>As before the standard disclaimer applies:</p>
<blockquote><p>All files and modifications described and provided here are at your own risk. This is a proof of concept. Don't tinker on devices you depend on and don't want to risk losing data. These changes are not an official Mozilla project and do not represent any future plans for Mozilla projects.</p></blockquote>
<p>This is probably as far as I'll take things for now with this proof of concept and see what happens from here after using it for a while.</p>
<h1><a href="http://bluishcoder.co.nz/2014/06/12/using-tor-with-firefox-os">Using Tor with Firefox OS</a> (2014-06-12)</h1>
<p><em>Update</em> - Please read my <a href="http://bluishcoder.co.nz/2014/06/13/update-to-tor-on-firefox-proof-of-concept.html">followup post</a> for some additional information and updated steps on building and installing <code>tor</code> on Firefox OS.</p>
<p>Please read the disclaimer at the end of this article. This is a proof of concept. It's a manual process and you shouldn't depend on it. Make sure you understand what you are doing.</p>
<p>I'm a fan of <a href="https://www.torproject.org/">Tor</a>. The Tor site explains what it does:</p>
<blockquote><p>Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.</p></blockquote>
<p>I make my personal website available as a Tor hidden service accessible from <a href="http://mh7mkfvezts5j6yu.onion/">mh7mkfvezts5j6yu.onion</a>. I try to make other sites I'm involved with also have a presence over Tor. I do a fair amount of my browsing over the Tor network for no reason other than I can and it limits the opportunity for people snooping on my data.</p>
<p>I want to be able to use Tor from Firefox OS. In particular I want it embedded as low level as possible so I have the option of all traffic going over Tor. I don't want to have to configure socks proxies.</p>
<p>Firefox OS doesn't allow native applications. The low level underlying system, however, is based on Linux and Android and can run native binaries. Starting with a rooted Firefox OS install I built Tor and used <code>iptables</code> to reroute all network traffic over it. This is a first step; this article demonstrates how to get it going so power users can try it out. My next step would be to investigate integrating it into the build system of Firefox OS and providing ways to start/stop it from the OS interface.</p>
<p>The first stage of building is to have an <a href="http://www.kandroid.org/ndk/docs/STANDALONE-TOOLCHAIN.html">Android standalone toolchain</a> installed. I describe how to do this in my <a href="http://bluishcoder.co.nz/2013/05/09/building-wasp-lisp-and-mosref-for-android.html">Wasp Lisp on Android</a> post or you can use <a href="http://bluishcoder.co.nz/nixos/standalone-ndk/default.nix">a Nix package</a> I created for use with the <a href="http://nixos.org/nix">Nix package manager</a>.</p>
<h2>Building libevent</h2>
<p>Tor requires <a href="http://libevent.org/">libevent</a> to build. I'm using static libraries to make creating a standalone <code>tor</code> binary easier. The following will build <code>libevent</code> given the standalone toolchain on your path:</p>
<pre><code>$ cd $HOME
$ mkdir build
$ cd build
$ wget https://github.com/downloads/libevent/libevent/libevent-2.0.21-stable.tar.gz
$ tar xvf libevent-2.0.21-stable.tar.gz
$ cd libevent-2.0.21-stable
$ ./configure --host=arm-linux-androideabi \
--prefix=$HOME/build/install \
--enable-static --disable-shared
$ make
$ make install
</code></pre>
<h2>Building zlib</h2>
<p>Tor requires <a href="http://www.openssl.org/">openssl</a> which in turn requires <a href="http://zlib.net/">zlib</a>:</p>
<pre><code>$ cd $HOME/build
$ wget http://zlib.net/zlib-1.2.8.tar.gz
$ tar xvf zlib-1.2.8.tar.gz
$ cd zlib-1.2.8
$ CC=arm-linux-androideabi-gcc ./configure --prefix=$HOME/build/install --static
$ make
$ make install
</code></pre>
<h2>Building openssl</h2>
<pre><code>$ cd $HOME/build
$ wget http://www.openssl.org/source/openssl-1.0.1h.tar.gz
$ tar xvf openssl-1.0.1h.tar.gz
$ cd openssl-1.0.1h
$ CC=arm-linux-androideabi-gcc ./Configure android no-shared --prefix=$HOME/build/install
$ make
$ make install
</code></pre>
<h2>Building tor</h2>
<pre><code>$ cd $HOME/build
$ wget https://www.torproject.org/dist/tor-0.2.4.22.tar.gz
$ tar xvf tor-0.2.4.22.tar.gz
$ cd tor-0.2.4.22
$ ./configure --host=arm-linux-androideabi \
--prefix=$HOME/build/install \
--enable-static-libevent
$ make
$ make install
</code></pre>
<h2>Packaging Tor for the device</h2>
<p>To run on the Firefox OS device I just installed the <code>tor</code> binary and a configuration file that enables transparent proxying as per the <a href="https://trac.torproject.org/projects/tor/wiki/doc/TransparentProxy">Tor documentation</a> on the subject. I put these in a directory that I push to an easily accessible place on the device:</p>
<pre><code>$ mkdir $HOME/build/device
$ cd $HOME/build/device
$ cp $HOME/build/install/bin/tor .
$ cat >torrc
...contents of configuration file...
$ adb push $HOME/build/device /data/local/tor
</code></pre>
<p>The configuration file is:</p>
<pre><code>DataDirectory /data/local/tor
Log notice file /data/local/tor/tor.log
RunAsDaemon 1
SOCKSPort 127.0.0.1:9050 IsolateDestAddr
SOCKSPort 127.0.0.1:9063
VirtualAddrNetwork 10.192.0.0/10
AutomapHostsOnResolve 1
TransPort 9040
DNSPort 9053
</code></pre>
<h2>Running tor</h2>
<p>I haven't integrated <code>tor</code> into the device at all so for this proof of concept I <code>adb shell</code> into it to run it and configure the <code>iptables</code> to redirect traffic:</p>
<pre><code>$ adb shell
# cd /data/local/tor
# ./tor -f torrc &
# iptables -t nat -A OUTPUT ! -o lo -p udp --dport 53 -j REDIRECT --to-ports 9053
# iptables -t nat -A OUTPUT ! -o lo -p tcp -j REDIRECT --to-ports 9040
</code></pre>
<h2>Testing</h2>
<p>The device should now be sending traffic over Tor. You can test by visiting sites like <a href="http://whatismyip.com">whatismyip.com</a> or <a href="http://icanhazip.com">icanhazip.com</a> to see if they report a different IP address and location from what you normally have. You can also try out hidden services like <a href="http://mh7mkfvezts5j6yu.onion/">mh7mkfvezts5j6yu.onion</a> which should show this site.</p>
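<p>A more direct check, assuming <code>curl</code> is present, is to query the Tor Project's check service through the local SOCKS port; when traffic is going over Tor the <code>/api/ip</code> endpoint should report <code>"IsTor":true</code> (hedged: the exact JSON shape of the response may change over time):</p>

```shell
# Query check.torproject.org through the local SOCKS proxy.
# --socks5-hostname resolves the hostname via the proxy as well,
# so the check itself doesn't leak a DNS request.
check_tor() {
  curl -s --socks5-hostname "${1:-127.0.0.1:9050}" \
       https://check.torproject.org/api/ip
}
# With tor running:  check_tor   (look for "IsTor":true in the output)
```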
<h2>Removing</h2>
<p>Killing the Tor process and removing the <code>iptables</code> entries will set the network back to normal:</p>
<pre><code>$ adb shell ps|grep tor
$ adb shell
# kill ...process id of tor...
# iptables -t nat -F
</code></pre>
<p>You can optionally delete the <code>/data/local/tor</code> directory to remove all tor files:</p>
<pre><code>$ adb shell rm -r /data/local/tor
</code></pre>
<h2>Future</h2>
<p>This is just a proof of concept. Don't depend on this. You need to restart Tor and re-run the <code>iptables</code> commands after a reboot. I'm not sure how well it interacts with switching between WiFi and GSM. Ideally Tor would be integrated with Firefox OS so that you can start and stop it as a service, and maybe whitelist or blacklist sites that should and shouldn't use Tor. I hope to do some of this over time, or that someone else gets excited enough to work on it too.</p>
<p>Another privacy aspect I'd like to investigate is whether TextSecure (or a similar service) could be <a href="http://threatpost.com/inside-the-textsecure-cyanogenmod-integration">integrated in the way it's done in CyanogenMod</a>:</p>
<blockquote><p>"The result is a system where a CyanogenMod user can choose to use any SMS app they'd like, and their communication with other CyanogenMod or TextSecure users will be transparently encrypted end-to-end over the data channel without requiring them to modify their work flow at all."</p></blockquote>
<p> Ideally my end goal would be to have something close to that described in the <a href="https://blog.torproject.org/blog/mission-impossible-hardening-android-security-and-privacy">hardening Android post</a> on the Tor Project blog.</p>
<p>I'm not sure how possible that is though. But Firefox OS is open source, easy to build and hack on, and runs on a lot of devices, including <a href="http://bluishcoder.co.nz/2014/06/11/dual-booting-android-and-firefox-os.html">multi booting on some</a>. Adding things like this to build your own custom phone OS that runs web applications is one of the great things the project enables. Users should feel like they can dive in and try things rather than wait for an OS release to support it (in my opinion of course).</p>
<h2>Test Builds</h2>
<p>A tar file containing a precompiled <code>tor</code> and the <code>torrc</code> is available at <a href="http://bluishcoder.co.nz/b2g/b2g_tor.tar.gz">b2g_tor.tar.gz</a>.</p>
<h2>Disclaimer</h2>
<p>All files and modifications described and provided here are at your own risk. Don't tinker on devices you depend on and don't want to risk losing data. These changes are not an official Mozilla project and do not represent any future plans for Mozilla projects.</p>
<h1><a href="http://bluishcoder.co.nz/2014/06/11/dual-booting-android-and-firefox-os">Dual Booting Android and Firefox OS on the Nexus 5</a> (2014-06-11)</h1>
<p>I've gone through periods of using a <a href="http://www.mozilla.org/en-US/firefox/os/">Firefox OS</a> phone as my main device but I've usually fallen back to Android due to needing some Android-only programs, and I don't like carrying two phones around. Today I decided to investigate how to dual boot Android alongside custom Firefox OS builds. Thankfully it was actually pretty easy.</p>
<p>The boot manager I used to get this to work is <a href="https://play.google.com/store/apps/details?id=com.tassadar.multirommgr">MultiROM Manager</a>, available from the Play store for rooted phones. The source for MultiROM Manager is <a href="https://github.com/Tasssadar/multirom/">available on github</a>. The phone I used was the Nexus 5. The instructions here assume you are familiar with <code>adb</code> and <code>fastboot</code> already.</p>
<p><em>Be aware that all these changes may lose the data you have on the device if you haven't already unlocked the boot loader and rooted the device.</em></p>
<h2>Make a backup of your Android settings and applications</h2>
<p>With the device plugged in and visible from <code>adb</code>:</p>
<pre><code>$ adb backup -apk -shared -all
</code></pre>
<p>This can be restored later if needed with:</p>
<pre><code>$ adb restore backup.ab
</code></pre>
<h2>Unlock the bootloader</h2>
<p>The Nexus 5, and other Google devices, make it easy to unlock the bootloader. With the device plugged in and visible from <code>adb</code>:</p>
<pre><code>$ adb reboot bootloader
$ fastboot oem unlock
</code></pre>
<p>Follow the screen instructions. This will erase everything on the device!</p>
<h2>Rooting the Nexus 5</h2>
<p>I used <a href="http://cfautoroot.com/">CF-Auto-Root</a>. I downloaded the <a href="http://www.devfiles.co/download/dEHWyjmo/CF-Auto-Root-hammerhead-hammerhead-nexus5.zip">version for the Nexus 5</a> and used <code>fastboot</code> to boot the image inside it:</p>
<pre><code>$ unzip CF-Auto-Root-hammerhead-hammerhead-nexus5.zip
$ fastboot boot image/CF-Auto-Root-hammerhead-hammerhead-nexus5.img
</code></pre>
<p>The device will reboot and perform the steps necessary to root it.</p>
<h2>Install MultiROM Manager</h2>
<p>Install <a href="https://play.google.com/store/apps/details?id=com.tassadar.multirommgr">MultiROM Manager</a> from the Play store. Run the app and choose <code>Install</code> after ticking the <code>MultiROM</code>, <code>Recovery</code> and <code>Kernel</code> check boxes. Follow the onscreen instructions.</p>
<h2>Build Firefox OS</h2>
<p>The Mozilla Developer Network has <a href="https://developer.mozilla.org/en-US/Firefox_OS/Building_and_installing_Firefox_OS">instructions for building Firefox OS</a>. Assuming all the pre-requisites are installed the steps are:</p>
<pre><code>$ git clone git://github.com/mozilla-b2g/B2G b2g
$ cd b2g
$ ./config.sh nexus-5
$ PRODUCTION=1 MOZILLA_OFFICIAL=1 ./build.sh
</code></pre>
<p>Don't flash the device from here. We'll create a MultiROM compatible ROM file to boot from.</p>
<h2>Create Firefox OS ROM file</h2>
<p>Create a directory to hold the ROM contents and copy the results of the build into it:</p>
<pre><code>$ mkdir rom
$ cd rom
$ rsync -rL ../out/target/product/hammerhead/system .
$ rsync -rL ../out/target/product/hammerhead/data .
$ cp ../out/target/product/hammerhead/boot.img .
</code></pre>
<p>For the <code>rsync</code> copy I deliberately chose not to copy symbolic links, instead re-copying the files they point to. I had difficulty getting symbolic links working and need to investigate further.</p>
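<p>To see concretely what <code>-L</code> does, here is a small self-contained demonstration using <code>cp -RL</code>, which dereferences links the same way (the paths are temporary scratch files, not part of the B2G tree):</p>

```shell
# Demonstrate link dereferencing: with -L, a symlink in the source is
# replaced by a copy of the file it points to in the destination.
tmp=$(mktemp -d)
mkdir "$tmp/src"
echo payload > "$tmp/src/real.txt"
ln -s real.txt "$tmp/src/link.txt"
cp -RL "$tmp/src" "$tmp/dst"   # -L: follow symlinks, copy their targets
if [ -f "$tmp/dst/link.txt" ] && [ ! -h "$tmp/dst/link.txt" ]; then
  echo "link.txt copied as a regular file"
fi
```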
<p>An Android ROM requires a <code>META-INF</code> directory containing a script that performs the update process. The following commands create this directory and copy in the binary that runs the script, along with the script itself:</p>
<pre><code>$ mkdir -p META-INF/com/google/android/
$ cp ../tools/update-tools/bin/gonk/update-binary META-INF/com/google/android/
$ curl http://bluishcoder.co.nz/b2g/updater-script >META-INF/com/google/android/updater-script
</code></pre>
<p>The updater script is one I wrote based on existing ones. It's pretty easy to follow if you want to read and change it.</p>
<p>The final step is to zip the contents, sign the archive, and push it to a directory on the device:</p>
<pre><code>$ zip -r9 b2g.zip *
$ java -jar ../prebuilts/sdk/tools/lib/signapk.jar \
../build/target/product/security/testkey.x509.pem \
../build/target/product/security/testkey.pk8 \
b2g.zip signed_b2g.zip
$ adb push signed_b2g.zip /sdcard/
</code></pre>
<h2>Install Firefox OS ROM</h2>
<p>Boot into recovery mode by holding the volume down and power buttons at the same time (or run <code>adb reboot recovery</code>). From the recovery menu choose 'Advanced' followed by 'MultiROM', then <code>Add ROM</code>.</p>
<p>Make sure <code>Android</code> is selected and <code>Don't Share</code> is chosen for "Share Kernel with Internal ROM". Click <code>Next</code>, choose <code>Zip file</code> and select the file we created in the signing step previously. Swipe to confirm as requested.</p>
<p>If this succeeds, reboot and touch the screen during the 'Auto boot' display to get the list of ROMs to run. Choosing the one we just installed should boot Firefox OS.</p>
<h2>Other ROMs</h2>
<p>With MultiROM you can install other ROMs and even <a href="http://www.ubuntu.com/phone">Ubuntu Touch</a>. I'd like to get <a href="http://bluishcoder.co.nz/2012/06/11/building-inferno-os-for-android-phones.html">Inferno OS</a> running under MultiROM as well so I can boot between all the operating systems I like to tinker with on one device.</p>
<h2>Try it</h2>
<p>I've placed a complete Firefox OS ROM for use with MultiROM on the Nexus 5 in <a href="https://mega.co.nz/#!OFwDDY7B!EqofLz_EGDN3yjz5AaotSQW5PfnZ8CP4VvcdlWOVkQA">b2g_nexus5.zip</a>. This was built from B2G <code>master</code> branch so may be broken in various aspects (The camera doesn't work for example) but will allow you to try the multi boot process out if you can't do builds. This is not an official Mozilla build and was generated by me personally. Use at your own risk.</p>
<h1><a href="http://bluishcoder.co.nz/2014/05/15/firefox-development-on-nixos">Firefox Development on NixOS</a> (2014-05-15)</h1>
<p>Now that I've <a href="http://bluishcoder.co.nz/2014/05/14/installing-nixos-with-encrypted-root-on-thinkpad-w540.html">got NixOS installed</a> I needed a way to build and make changes to Firefox and Firefox OS. This post goes through the approach I've taken to work on the Firefox codebase. In a later post I'll build on this to do Firefox OS development.</p>
<p>Building Firefox isn't difficult as NixOS has definitions for standard Firefox builds to follow as examples. Building from a local source repository requires all the pre-requisite packages to be installed. I don't want to pollute my local user environment with all these packages though, as I develop other things which may have version clashes. As an example, Firefox requires <code>autoconf-2.13</code> whereas other systems I develop with require different versions.</p>
<p>NixOS (through the Nix package manager) allows setting up build environments that contain specific packages and versions. Switching between these is easy. The file <code>~/.nixpkgs/config.nix</code> can contain definitions specific for a user. I add the definitions as a <code>packageOverride</code> in this file. The structure of the file looks like:</p>
<pre><code>{
packageOverrides = pkgs : with pkgs; rec {
..new definitions here..
};
}
</code></pre>
<p>My definition for a build environment for Firefox is:</p>
<pre><code>firefoxEnv = pkgs.myEnvFun {
name = "firefoxEnv";
buildInputs = [ stdenv pkgconfig gtk glib gobjectIntrospection
dbus_libs dbus_glib alsaLib gcc xlibs.libXrender
xlibs.libX11 xlibs.libXext xlibs.libXft xlibs.libXt
ats pango freetype fontconfig gdk_pixbuf cairo python
git autoconf213 unzip zip yasm alsaLib dbus_libs which atk
gstreamer gst_plugins_base pulseaudio
];
extraCmds = ''
export C_INCLUDE_PATH=${dbus_libs}/include/dbus-1.0:${dbus_libs}/lib/dbus-1.0/include
export CPLUS_INCLUDE_PATH=${dbus_libs}/include/dbus-1.0:${dbus_libs}/lib/dbus-1.0/include
LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:${gcc.gcc}/lib64
for i in $nativeBuildInputs; do
LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:\$i/lib
done
export LD_LIBRARY_PATH
export AUTOCONF=autoconf
'';
};
</code></pre>
<p>The Nix function <code>pkgs.myEnvFun</code> creates a program that can be run by the user to set up the environment such that the listed packages are available. This is done using symlinks and environment variables. The resulting shell can then be used for normal development. By creating special environments for development tasks it becomes possible to build with different versions of packages. For example, replace <code>gcc</code> with <code>gcc46</code> and the environment will use that C compiler version. Environments for different versions of pango, gstreamer and other libraries can easily be created for testing Firefox builds with those specific versions.</p>
<p>The <code>buildInputs</code> field contains an array of the packages to be available. These are all the pre-requisites <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Linux_Prerequisites">as listed in the Mozilla build documentation</a>. This could be extended with developer tools (Vim, Emacs, Mercurial, etc.) if desired.</p>
<p>When creating definitions that have a build product Nix will arrange the dynamic loader and paths to link to the correct versions of the libraries so that they can be found at runtime. When building an environment we need to change <code>LD_LIBRARY_PATH</code> to include the paths to the libraries for all the packages we are using. This is what the <code>extraCmds</code> section does. It is a shell script that is run to setup additional things for the environment.</p>
<p>The <code>extraCmds</code> in this definition adds to <code>LD_LIBRARY_PATH</code> the <code>lib</code> directory of all the packages in <code>buildInputs</code>. It exports an <code>AUTOCONF</code> environment variable to be the <code>autoconf</code> executable we are using. This variable is used in the Mozilla build system to find <code>autoconf-2.13</code>. It also adds to the C and C++ include path to find the DBus libraries which are in a nested <code>dbus-1.0</code> directory.</p>
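<p>Extracted from the Nix expression (and with the Nix-level <code>\$</code> escaping removed), the loop is plain shell. With some hypothetical package roots it behaves like this:</p>

```shell
# Standalone version of the extraCmds loop: append each package's lib
# directory to LD_LIBRARY_PATH. The package roots here are made up for
# illustration; in the real environment they come from $nativeBuildInputs.
inputs="/opt/pkg-a /opt/pkg-b"
LD_LIBRARY_PATH=/usr/lib64
for i in $inputs; do
  LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$i/lib
done
echo "$LD_LIBRARY_PATH"
# → /usr/lib64:/opt/pkg-a/lib:/opt/pkg-b/lib
```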
<p>To build and install this new package use <code>nix-env</code>:</p>
<pre><code>$ nix-env -i env-firefoxEnv
</code></pre>
<p>Running the resulting <code>load-env-firefoxEnv</code> command will create a shell environment that can be used to build Firefox:</p>
<pre><code>$ load-env-firefoxEnv
...
env-firefoxEnv loaded
$ git clone git://github.com/mozilla/gecko-dev
...
$ cd gecko-dev
$ ./mach build
</code></pre>
<p>Exiting the shell will remove access to the pre-requisite libraries and tools needed to build Firefox. This keeps your global user environment free and minimizes the chance of clashes.</p>
<h1><a href="http://bluishcoder.co.nz/2014/05/14/installing-nixos-with-encrypted-root-on-thinkpad-w540">Installing NixOS on a ThinkPad W540 with encrypted root</a> (2014-05-14)</h1>
<p>I recently got a ThinkPad W540 laptop and I'm trying out the <a href="https://nixos.org/nixos/">NixOS</a> Linux distribution:</p>
<blockquote><p>NixOS is a GNU/Linux distribution that aims to improve the state of the art in system configuration management. In existing distributions, actions such as upgrades are dangerous: upgrading a package can cause other packages to break, upgrading an entire system is much less reliable than reinstalling from scratch, you can't safely test what the results of a configuration change will be, you cannot easily undo changes to the system, and so on.</p></blockquote>
<p>I use the <a href="https://nixos.org/nix/">Nix package manager</a> alongside other distributions and decided to try out the full operating system. This post outlines the steps I took to install NixOS with full disk encryption using LVM on LUKS.</p>
<h2>Windows</h2>
<p>The W540 comes with Windows 8.1 pre-installed and recovery partitions to enable rebuilding the system. I followed the install procedure to get Windows working and proceeded to make a <a href="http://support.lenovo.com/en_US/downloads/detail.page?DocID=HT076024">recovery USB drive</a> so I could get back to the starting state if things went wrong. Once this completed I went on with installing NixOS.</p>
<h2>NixOS Live CD</h2>
<p>I used the <a href="http://nixos.org/nixos/download.html">NixOS Graphical Live CD</a> to install. I could have used the minimal CD but I went for the graphical option to make sure the basic OS worked fine on the hardware. I installed the Live CD to a USB stick from another Linux machine using <a href="http://unetbootin.sourceforge.net/">unetbootin</a>.</p>
<p>To boot from this I had to change the W540 BIOS settings:</p>
<ul>
<li>Change the USB drive in the boot sequence so it was the first boot option.</li>
<li>Disable Secure Boot.</li>
<li>Change UEFI to be UEFI/Legacy Bios from the previous UEFI only setting.</li>
</ul>
<p>Booting from the USB drive on the W540 worked fine and got me to a login prompt. Logging in with <code>root</code> and no password gives a root shell. Installation can proceed from there or the GUI can be started with <code>start display-manager</code>.</p>
<h2>Networking</h2>
<p>The installation process requires a connected network. I used a wireless network, which is configured in the Live CD using <a href="http://hostap.epitest.fi/wpa_supplicant/">wpa_supplicant</a>. This required editing <code>/etc/wpa_supplicant.conf</code> to contain the settings for the network I was connecting to. For a public network it was something like:</p>
<pre><code>network={
ssid="My Network"
key_mgmt=NONE
}
</code></pre>
<p>The <code>wpa_supplicant</code> service needs to be restarted after this:</p>
<pre><code># systemctl restart wpa_supplicant.service
</code></pre>
<p>It's important to get the syntax of the <code>wpa_supplicant.conf</code> file correct otherwise it will fail to restart with no visible error.</p>
<h2>Partition the disk</h2>
<p>Partitioning is done manually using <a href="http://www.rodsbooks.com/gdisk/">gdisk</a>. Three partitions are needed:</p>
<ul>
<li>A small partition to hold GPT information and provide a place for GRUB to store data. I made this 1MB in size and it must have a partition type of <code>ef02</code>. This was <code>/dev/sda1</code>.</li>
<li>An unencrypted boot partition used to start the initial boot, and load the encrypted partition. I made this 1GB in size (which is on the large side for what it needs to be) and left it at the partition type <code>8300</code>. This was <code>/dev/sda2</code>.</li>
<li>The full disk encrypted partition. This was set to the size of the rest of the drive and partition type set to <code>8e00</code> for "Linux LVM". This was <code>/dev/sda3</code>.</li>
</ul>
<h2>Create encrypted partitions</h2>
<p>Once the disk is partitioned as above, we need to encrypt the main root partition and use LVM to create logical partitions within it for swap and root:</p>
<pre><code># cryptsetup luksFormat /dev/sda3
# cryptsetup luksOpen /dev/sda3 enc-pv
# pvcreate /dev/mapper/enc-pv
# vgcreate vg /dev/mapper/enc-pv
# lvcreate -L 40G -n swap vg
# lvcreate -l 111591 -n root vg
</code></pre>
<p>The <code>lvcreate</code> commands create the logical partitions. The first is a 40GB swap drive. The laptop has 32GB of memory so I set this to be enough to store all of memory when hibernating plus extra. It could be made quite a bit smaller. The second creates the root partition. I use the <code>-l</code> switch there to set the exact number of extents for the size. I got this number by trying a <code>-L</code> with a larger size than the drive and used the number in the resulting error message.</p>
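<p>The extent numbers can be sanity-checked with a little arithmetic, assuming LVM's default 4 MiB physical extent size (verify what your volume group actually uses with <code>vgdisplay</code>; a different extent size changes the math):</p>

```shell
# extents = size / extent size. With 4 MiB extents:
extent_mib=4
echo $(( 40 * 1024 / extent_mib ))      # the 40G swap LV is 10240 extents
echo $(( 111591 * extent_mib / 1024 ))  # 111591 extents is roughly 435 GiB
```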
<h2>Format partitions</h2>
<p>The unencrypted boot partition is formatted with <code>ext2</code> and the root partition with <code>ext4</code>:</p>
<pre><code># mkfs.ext2 -L boot /dev/sda2
# mkfs.ext4 -O dir_index -j -L root /dev/vg/root
# mkswap -L swap /dev/vg/swap
</code></pre>
<p>These should be mounted for the install process as follows:</p>
<pre><code># mount /dev/bg/root /mnt
# mkdir /mnt/boot
# mount /dev/sda2 /mnt/boot
# swapon /dev/vg/swap
</code></pre>
<h2>Configure NixOS</h2>
<p>NixOS uses a declarative language for the configuration file that is used to install and configure the operating system. An initial file ready to be edited should be created with:</p>
<pre><code>$ nixos-generate-config --root /mnt
</code></pre>
<p>This creates the following files:</p>
<ul>
<li><code>/mnt/etc/nixos/configuration.nix</code></li>
<li><code>/mnt/etc/nixos/hardware-configuration.nix</code></li>
</ul>
<p>The latter file is rewritten every time this command is run. The first file can be edited and is never rewritten. For the initial boot I had to make one change to <code>hardware-configuration.nix</code>: I commented out this line:</p>
<pre><code># services.xserver.videoDrivers = [ "nvidia" ];
</code></pre>
<p>I can re-add it later when configuring the X server if I want to use the <code>nvidia</code> driver.</p>
<p>The changes that need to be made to <code>configuration.nix</code> involve setting the GRUB partition, the Luks data and any additional packages to be installed. The Luks settings I added were:</p>
<pre><code>boot.initrd.luks.devices = [
{
name = "root"; device = "/dev/sda3"; preLVM = true;
}
];
</code></pre>
<p>I changed the GRUB boot loader device to be:</p>
<pre><code>boot.loader.grub.device = "/dev/sda";
</code></pre>
<p>To enable wireless I made sure I had:</p>
<pre><code>networking.wireless.enable = true;
</code></pre>
<p>I added my preferred editor, vim, to the system packages:</p>
<pre><code>environment.systemPackages = with pkgs; [
vim
];
</code></pre>
<p>Enable OpenSSH:</p>
<pre><code>services.openssh.enable = true;
</code></pre>
<p>I've left configuring X and other things for later.</p>
<h2>Install NixOS</h2>
<p>To install based on the configuration made above:</p>
<pre><code># nixos-install
</code></pre>
<p>If that completes successfully the system can be rebooted into the newly installed NixOS:</p>
<pre><code># reboot
</code></pre>
<p>You'll need to enter the encryption password that was created during <code>cryptsetup</code> when rebooting.</p>
<h2>Completing installation</h2>
<p>Once rebooted re-enable the network by performing the <code>/etc/wpa_supplicant.conf</code> steps done during the install.</p>
<p>Installation of additional packages can continue following the <a href="http://nixos.org/nixos/manual/">NixOS manual</a>. This mostly involves adding or changing settings in <code>/etc/nixos/configuration.nix</code> and then running:</p>
<pre><code># nixos-rebuild switch
</code></pre>
<p>This is outlined in <a href="http://nixos.org/nixos/manual/#sec-changing-config">Changing the Configuration</a> in the manual.</p>
<h2>Troubleshooting</h2>
<p>The most common errors I made were syntax errors in <code>wpa_supplicant.conf</code> and <code>configuration.nix</code>. The other issue I had was not creating the initial GPT partition. GRUB will give an error in this case explaining the issue. You can reboot the Live USB drive at any time and mount the encrypted drives to edit files if needed. The commands to mount the drives are:</p>
<pre><code># cryptsetup luksOpen /dev/sda3 enc-pv
# vgscan --mknodes
# vgchange -ay
# mount /dev/vg/root /mnt
# mkdir /mnt/boot
# mount /dev/sda2 /mnt/boot
# swapon /dev/vg/swap
</code></pre>
<h2>Tips</h2>
<p><code>environment.systemPackages</code> in <code>/etc/nixos/configuration.nix</code> is where you add packages that are seen by all users. When this is changed you need to run the following for it to take effect:</p>
<pre><code># nixos-rebuild switch
</code></pre>
<p>To find the package name to use, run something like (for vim):</p>
<pre><code>$ nix-env -qaP '*'|grep vim
</code></pre>
<p>A user can add their own packages using:</p>
<pre><code>$ nix-env -i vim
</code></pre>
<p>And remove with:</p>
<pre><code>$ nix-env -e vim
</code></pre>
<p>A useful GUI for connecting to wireless networks is <code>wpa_gui</code>. To enable this add <code>wpa_supplicant_gui</code> to <code>environment.systemPackages</code> in <code>/etc/nixos/configuration.nix</code> followed by a <code>nixos-rebuild switch</code>. Add the following line to <code>/etc/wpa_supplicant.conf</code>:</p>
<pre><code>ctrl_interface=/var/run/wpa_supplicant
</code></pre>
<p>Restart <code>wpa_supplicant</code> and run the gui:</p>
<pre><code>$ systemctl restart wpa_supplicant.service
$ sudo wpa_gui
</code></pre>
<p>It's possible to make custom changes to Nix packages for each user. This is controlled by adding definitions to <code>~/.nixpkgs/config.nix</code>. The following <code>config.nix</code> will provide Firefox with the official branding:</p>
<pre><code>{
packageOverrides = pkgs : with pkgs; rec {
firefoxPkgs = pkgs.firefoxPkgs.override { enableOfficialBranding = true; };
};
}
</code></pre>
<p>Installing or re-installing for the user will use this version of Firefox:</p>
<pre><code>$ nix-env -i firefox
</code></pre>
<h1><a href="http://bluishcoder.co.nz/2014/04/11/preventing-heartbleed-bugs-with-safe-languages">Preventing heartbleed bugs with safe programming languages</a> (2014-04-11)</h1>
<p>The <a href="http://heartbleed.com/">Heartbleed bug</a> in OpenSSL has resulted in a fair amount of damage across the internet. The bug itself was <a href="http://blog.existentialize.com/diagnosis-of-the-openssl-heartbleed-bug.html">quite simple</a> and is a textbook case for why programming in unsafe languages like C can be problematic.</p>
<p>As an experiment to see if a safer systems programming language could have prevented the bug I tried rewriting the problematic function in the <a href="http://www.ats-lang.org/">ATS programming language</a>. I've <a href="http://bluishcoder.co.nz/tags/ats">written about ATS as a safer C</a> before. This gives a real world test case for it. I used the latest version of ATS, called ATS2.</p>
<p>ATS compiles to C code. The function interfaces it generates can exactly match existing C functions and be callable from C. I used this feature to replace the <code>dtls1_process_heartbeat</code> and <code>tls1_process_heartbeat</code> functions in OpenSSL with ATS versions. These two functions are the ones that were patched to correct the heartbleed bug.</p>
<p>The approach I took was to follow something similar to that <a href="http://sourceforge.net/p/ats-lang/mailman/message/32204291/">outlined by John Skaller</a> on the ATS mailing list:</p>
<pre><code>ATS on the other hand is basically C with a better type system.
You can write very low level C like code without a lot of the scary
dependent typing stuff and then you will have code like C, that
will crash if you make mistakes.
If you use the high level typing stuff coding is a lot more work
and requires more thinking, but you get much stronger assurances
of program correctness, stronger than you can get in Ocaml
or even Haskell, and you can even hope for *better* performance
than C by elision of run time checks otherwise considered mandatory,
due to proof of correctness from the type system. Expect over
50% of your code to be such proofs in critical software and probably
90% of your brain power to go into constructing them rather than
just implementing the algorithm. It's a paradigm shift.
</code></pre>
<p>I started with wrapping the C code directly and calling that from ATS. From there I rewrote the C code into unsafe ATS. Once that worked I added types to find errors.</p>
<p>I've put the modified OpenSSL code in <a href="https://github.com/doublec/openssl/branches">a github fork</a>. The two branches there, <code>ats</code> and <code>ats_safe</code>, represent the latter two stages in implementing the functions in ATS.</p>
<p>I'll give a quick overview of the different paths I took then go into some detail about how I used ATS to find the errors.</p>
<h2>Wrapping C code</h2>
<p>I've <a href="http://bluishcoder.co.nz/2011/04/24/converting-c-programs-to-ats.html">written about this approach before</a>. ATS allows embedding C directly so the first start was to embed the <code>dtls1_process_heartbeat</code> C code in an ATS file, call that from an ATS function and expose that ATS function as the real <code>dtls1_process_heartbeat</code>. The code for this is in <a href="https://github.com/doublec/openssl/blob/f40838fd8b2c4ba907865eef54e5cca96dc0c62f/ssl/d1_both.dats">my first attempt of d1_both.dats</a>.</p>
<h2>Unsafe ATS</h2>
<p>The second stage was to write the functions in ATS, but unsafely. This code is a direct translation of the C code with no additional typechecking via ATS features. It uses unsafe ATS code. The <a href="https://github.com/doublec/openssl/blob/ats/ssl/d1_both.dats">rewritten d1_both.dats</a> contains this version.</p>
<p>The code is quite ugly but compiles and matches the C version. When installed on a test system it still exhibits the heartbleed bug. It uses the same pointer arithmetic and hard coded offsets as the C code. Here's a snippet of one branch of the function:</p>
<pre><code>val buffer = OPENSSL_malloc(1 + 2 + $UN.cast2int(payload) + padding)
val bp = buffer
val () = $UN.ptr0_set<uchar> (bp, TLS1_HB_RESPONSE)
val bp = ptr0_succ<uchar> (bp)
val bp = s2n (payload, bp)
val () = unsafe_memcpy (bp, pl, payload)
val bp = ptr_add (bp, payload)
val () = RAND_pseudo_bytes (bp, padding)
val r = dtls1_write_bytes (s, TLS1_RT_HEARTBEAT, buffer, 3 + $UN.cast2int(payload) + padding)
val () = if r >= 0 && ptr_isnot_null (get_msg_callback (s)) then
           call_msg_callback (get_msg_callback (s),
                              1, get_version (s), TLS1_RT_HEARTBEAT,
                              buffer, $UN.cast2uint (3 + $UN.cast2int(payload) + padding), s,
                              get_msg_callback_arg (s))
val () = OPENSSL_free (buffer)
</code></pre>
<p>It should be pretty easy to follow this comparing the code to the C version.</p>
<h2>Safer ATS</h2>
<p>The third stage was adding types to the unsafe ATS version to check that the pointer arithmetic is correct and no bounds errors occur. This <a href="https://github.com/doublec/openssl/blob/12b89f1b2d714835b9257c10bcc5fd210714d07d/ssl/d1_both.dats">version of d1_both.dats</a> fails to compile if certain bounds checks aren't asserted. If the <code>assertloc</code> at <a href="https://github.com/doublec/openssl/blob/12b89f1b2d714835b9257c10bcc5fd210714d07d/ssl/d1_both.dats#l123">line 123</a>, <a href="https://github.com/doublec/openssl/blob/12b89f1b2d714835b9257c10bcc5fd210714d07d/ssl/d1_both.dats#l178">line 178</a> or <a href="https://github.com/doublec/openssl/blob/12b89f1b2d714835b9257c10bcc5fd210714d07d/ssl/d1_both.dats#l193">line 193</a> is removed then a constraint error is produced. This error effectively prevents the heartbleed bug.</p>
<h2>Testable Version</h2>
<p>The last stage I did was to implement the fix to the <code>tls1_process_heartbeat</code> function and factor out some of the helper routines so they could be shared. This is in the <a href="https://github.com/doublec/openssl/tree/ats_safe">ats_safe</a> branch, which is where the newer changes are happening. This version replaces the <code>assertloc</code> usage with graceful failure handling so it could be tested on a live site.</p>
<p>I tested this version of OpenSSL and heartbleed test programs fail to dump memory.</p>
<h2>The approach to safety</h2>
<p>The <code>tls1_process_heartbeat</code> function obtains a pointer to data provided by the sender and the amount of data sent from one of the OpenSSL internal structures. It expects the data to be in the following format:</p>
<pre><code> byte     = hbtype
 ushort   = payload length
 byte[n]  = bytes of length 'payload length'
 byte[16] = padding
</code></pre>
<p>The existing C code makes the mistake of trusting the 'payload length' the sender supplies and passes that to a memcpy. If the actual length of the data is less than the 'payload length' then random data from memory gets sent in the response.</p>
<p>In ATS pointers can be manipulated at will but they can't be dereferenced or used unless there is a <code>view</code> in scope that proves what is located at that memory address. By passing around views, and subsets of views, it becomes possible to check that ATS code doesn't access memory it shouldn't. Views become like capabilities. You hand them out when you want code to have the capability to do things with the memory safely and take it back when it's done.</p>
<h2>Views</h2>
<p>To model what the C code does I created an ATS view that represents the layout of the data in memory:</p>
<pre><code>dataview record_data_v (addr, int) =
| {l:agz} {n:nat | n > 16 + 2 + 1} make_record_data_v (l, n) of (ptr l, size_t n)
| record_data_v_fail (null, 0) of ()
</code></pre>
<p>A 'view' is like a standard ML datatype but exists at type checking time only. It is erased in the final version of the program so has no runtime overhead. This view has two constructors. The first is for data held at a memory address <code>l</code> of length <code>n</code>. The length is constrained to be greater than <code>16 + 2 + 1</code> which is the size of the 'byte', 'ushort' and 'padding' mentioned previously. By putting the constraint here we immediately force anyone creating this view to check the length they pass in. The second constructor, <code>record_data_v_fail</code>, is for the case of an invalid record buffer.</p>
<p>The function that creates this view looks like:</p>
<pre><code>implement get_record (s) = let
  val len = get_record_length (s)
  val data = get_record_data (s)
in
  if len > 16 + 2 + 1 then
    (make_record_data_v (data, len) | data, len)
  else
    (record_data_v_fail () | null_ptr1 (), i2sz 0)
end
</code></pre>
<p>Here the <code>len</code> and <code>data</code> are obtained from the SSL structure. The length is checked and the view is created and returned along with the pointer to the data and the length. If the length check is removed there is a compile error due to the constraint we placed earlier on <code>make_record_data_v</code>. Calling code looks like:</p>
<pre><code>val (pf_data | p_data, data_len) = get_record (s)
</code></pre>
<p><code>p_data</code> is a pointer. <code>data_len</code> is an unsigned value and <code>pf_data</code> is our view. In my code the <code>pf_</code> suffix denotes a proof of some sort (in this case the view) and <code>p_</code> denotes a pointer.</p>
<h2>Proof functions</h2>
<p>In ATS we can't do anything at all with the <code>p_data</code> pointer other than increment, decrement and pass it around. To dereference it we must obtain a view proving what is at that memory address. To get specialized views for the data we want I created some proof functions that convert the <code>record_data_v</code> view to views that provide access to memory. These are the proof functions:</p>
<pre><code>(* These proof functions extract proofs out of the record_data_v
   to allow access to the data stored in the record. The constants
   for the size of the padding, payload buffer, etc are checked
   within the proofs so that functions that manipulate memory
   are checked that they remain within the correct bounds and
   use the appropriate pointer values
*)
prfun extract_data_proof {l:agz} {n:nat}
  (pf: record_data_v (l, n)):
  (array_v (byte, l, n),
   array_v (byte, l, n) -<lin,prf> record_data_v (l,n))

prfun extract_hbtype_proof {l:agz} {n:nat}
  (pf: record_data_v (l, n)):
  (byte @ l, byte @ l -<lin,prf> record_data_v (l,n))

prfun extract_payload_length_proof {l:agz} {n:nat}
  (pf: record_data_v (l, n)):
  (array_v (byte, l+1, 2),
   array_v (byte, l+1, 2) -<lin,prf> record_data_v (l,n))

prfun extract_payload_data_proof {l:agz} {n:nat}
  (pf: record_data_v (l, n)):
  (array_v (byte, l+1+2, n-16-2-1),
   array_v (byte, l+1+2, n-16-2-1) -<lin,prf> record_data_v (l,n))

prfun extract_padding_proof {l:agz} {n:nat} {n2:nat | n2 <= n - 16 - 2 - 1}
  (pf: record_data_v (l, n), payload_length: size_t n2):
  (array_v (byte, l + n2 + 1 + 2, 16),
   array_v (byte, l + n2 + 1 + 2, 16) -<lin, prf> record_data_v (l, n))
</code></pre>
<p>Proof functions are run at type checking time. They manipulate proofs. Let's break down what the <code>extract_hbtype_proof</code> function does:</p>
<pre><code>prfun extract_hbtype_proof {l:agz} {n:nat}
  (pf: record_data_v (l, n)):
  (byte @ l, byte @ l -<lin,prf> record_data_v (l,n))
</code></pre>
<p>This function takes a single argument, <code>pf</code>, that is a <code>record_data_v</code> instance for an address <code>l</code> and length <code>n</code>. This proof argument is consumed. Once it is called it cannot be accessed again (it is a linear proof). The function returns two things. The first is a proof <code>byte @ l</code> which says "there is a byte stored at address <code>l</code>". The second is a linear proof function that takes the first proof we returned, consumes it so it can't be reused, and returns the original proof we passed in as an argument.</p>
<p>This is a fairly common idiom in ATS. It takes a proof, destroys it, returns a new one, and provides a way of destroying the new one to bring back the old one. Here's how the function is used:</p>
<pre><code>prval (pf, pff) = extract_hbtype_proof (pf_data)
val hbtype = $UN.cast2int (!p_data)
prval pf_data = pff (pf)
</code></pre>
<p><code>prval</code> is a declaration of a proof variable. <code>pf</code> is my idiomatic name for a proof and <code>pff</code> is what I use for proof functions that destroy proofs and return the original.</p>
<p>The <code>!p_data</code> is similar to <code>*p_data</code> in C. It dereferences what is held at the pointer. When this happens in ATS it searches for a proof that we can access some memory at <code>p_data</code>. The <code>pf</code> proof we obtained says we have a <code>byte @ p_data</code> so we get a byte out of it.</p>
<p>A more complicated proof function is:</p>
<pre><code>prfun extract_payload_length_proof {l:agz} {n:nat}
  (pf: record_data_v (l, n)):
  (array_v (byte, l+1, 2),
   array_v (byte, l+1, 2) -<lin,prf> record_data_v (l,n))
</code></pre>
<p>The <code>array_v</code> view represents a contiguous array of memory. The three arguments it takes are the type of data stored in the array, the address of the beginning, and the number of elements. So this function consumes the <code>record_data_v</code> and produces a proof saying there is a two element array of bytes held at the 1st byte offset from the original memory address held by the record view. Someone with access to this proof cannot access the entire memory buffer held by the SSL record. They can only get the 2 bytes at the 1st offset.</p>
<h2>Safe memcpy</h2>
<p>One more complicated view:</p>
<pre><code>prfun extract_payload_data_proof {l:agz} {n:nat}
  (pf: record_data_v (l, n)):
  (array_v (byte, l+1+2, n-16-2-1),
   array_v (byte, l+1+2, n-16-2-1) -<lin,prf> record_data_v (l,n))
</code></pre>
<p>This returns a proof for an array of bytes starting at the 3rd byte of the record buffer. Its length is equal to the length of the record buffer less the size of the padding and the first two data items. It's used during the <code>memcpy</code> call like so:</p>
<pre><code>prval (pf_dst, pff_dst) = extract_payload_data_proof (pf_response)
prval (pf_src, pff_src) = extract_payload_data_proof (pf_data)
val () = safe_memcpy (pf_dst, pf_src |
                      add_ptr1_bsz (p_buffer, i2sz 3),
                      add_ptr1_bsz (p_data, i2sz 3),
                      payload_length)
prval pf_response = pff_dst(pf_dst)
prval pf_data = pff_src(pf_src)
</code></pre>
<p>By having a proof that provides access to only the payload data area we can be sure that the <code>memcpy</code> cannot copy memory outside those bounds. Even though the code does manual pointer arithmetic (the <code>add_ptr1_bsz</code> function) this is safe. An attempt to use a pointer outside the range of the proof results in a compile error.</p>
<p>The same concept is used when setting the padding to random bytes:</p>
<pre><code>prval (pf, pff) = extract_padding_proof (pf_response, payload_length)
val () = RAND_pseudo_bytes (pf |
add_ptr_bsz (p_buffer, payload_length + 1 + 2),
padding)
prval pf_response = pff(pf)a
</code></pre>
<h2>Runtime checks</h2>
<p>The code does runtime checks that constrain the bounds of various length variables:</p>
<pre><code>if payload_length > 0 then
  if data_len >= payload_length + padding + 1 + 2 then
    ...
...
</code></pre>
<p>Without those checks a compile error is produced. The original heartbeat flaw was the absence of similar runtime checks. The code as structured can't suffer from that flaw and still be compiled.</p>
<h2>Testing</h2>
<p>This code can be built and tested. First step is to <a href="http://www.ats-lang.org/DOWNLOAD/">install ATS2</a>:</p>
<pre><code>$ tar xvf ATS2-Postiats-0.0.7.tgz
$ cd ATS2-Postiats-0.0.7
$ ./configure
$ make
$ export PATSHOME=`pwd`
$ export PATH=$PATH:$PATSHOME/bin
</code></pre>
<p>Then compile the openssl code with my ATS additions:</p>
<pre><code>$ git clone https://github.com/doublec/openssl
$ cd openssl
$ git checkout -b ats_safe origin/ats_safe
$ ./config
$ make
$ make test
</code></pre>
<p>Try changing some of the code, modifying the constraint tests, etc., to get an idea of what it is doing.</p>
<p>For testing in a VM, I installed Ubuntu, set up an nginx instance serving an HTTPS site and did something like:</p>
<pre><code>$ git clone https://github.com/doublec/openssl
$ cd openssl
$ git diff 5219d3dd350cc74498dd49daef5e6ee8c34d9857 >~/safe.patch
$ cd ..
$ apt-get source openssl
$ cd openssl-1.0.1e/
$ patch -p1 <~/safe.patch
...might need to fix merge conflicts here...
$ fakeroot debian/rules build binary
$ cd ..
$ sudo dpkg -i libssl1.0.0_1.0.1e-3ubuntu1.2_amd64.deb \
libssl-dev_1.0.1e-3ubuntu1.2_amd64.deb
$ sudo /etc/init.d/nginx restart
</code></pre>
<p>You can then use a heartbleed tester on the HTTPS server and it should fail.</p>
<h2>Conclusion</h2>
<p>I think the approach of converting unsafe C code piecemeal worked quite well in this instance. Being able to combine existing C code and ATS makes this much easier. I only concentrated on detecting this particular programming error but it would be possible to use other ATS features to detect memory leaks, abstraction violations and other things. It's possible to <a href="http://bluishcoder.co.nz/2012/08/30/safer-handling-of-c-memory-in-ats.html">get very specific</a> in defining safe interfaces at a cost of complexity in code.</p>
<p>Although I've used ATS for production code this is my first time using ATS2. I may have missed idioms and library functions to make things easier so try not to judge the verbosity or difficulty of the code based on this experiment. The <a href="http://www.ats-lang.org/COMMUNITY/">ATS community</a> is helpful in picking up the language. My approach to doing this was basically to add types and then work through the compiler errors, fixing each one until it builds.</p>
<p>One immediate question becomes "How do you trust your proof". The <code>record_data_v</code> view and the proof functions that manipulate it define the level of checking that occurs. If they are wrong then the constraints checked by the program will be wrong. It comes down to having a trusted kernel of code (in this case the proof and view); users of that kernel can then be trusted to be correct. Incorrect use of the kernel is caught by the compiler, and that is what provides the stronger safety. From an auditing perspective it's easier to check the small trusted kernel and then know the compiler will make sure pointer manipulations are correct.</p>
<p>The ATS specific additions are in the following files:</p>
<ul>
<li><a href="https://github.com/doublec/openssl/blob/ats_safe/ssl/d1_both.dats">d1_both.dats</a></li>
<li><a href="https://github.com/doublec/openssl/blob/ats_safe/ssl/t1_lib.dats">t1_lib.dats</a></li>
<li><a href="https://github.com/doublec/openssl/blob/ats_safe/ssl/shared.sats">shared.sats</a></li>
<li><a href="https://github.com/doublec/openssl/blob/ats_safe/ssl/shared.dats">shared.dats</a></li>
<li><a href="https://github.com/doublec/openssl/blob/ats_safe/ssl/shared.cats">shared.cats</a></li>
</ul>
HTML Media support in Firefox2013-08-21T15:00:00+12:00http://bluishcoder.co.nz/2013/08/21/html-media-support-in-firefox<p>Some time ago I wrote a <a href="https://groups.google.com/forum/?fromgroups=#!topic/mozilla.dev.media/o3OuUVbetYg">dev-media post</a> outlining the media formats we support and requesting discussion on future formats. This post summarizes the results of that and changes that have occurred since then.</p>
<p>In general Mozilla is trying to limit the proliferation of media formats on the web. One reason for doing this is to make it easier for a website to provide media that they know can be played by the majority of web users. Ideally the formats supported would be freely implementable by anyone in any user agent. This enables someone to share media on the web knowing that all users can watch it.</p>
<p>A counter argument is that limiting the formats to a small number restricts the producers of the media. They are unable to select the best formats for the type of media they are sharing that plays best on specific devices (for example, taking advantage of hardware acceleration on a device). The choice of what formats can be used would be left up to the operating system in this case and the browser would fall back to using operating system support when it wants to decode media data.</p>
<p>Taking the operating system codec approach would result in media on the web only being playable by a subset of users. If someone shares a <code>wmv</code> video file it will only play back on systems supporting that format. The list of all possible formats is huge and the intersection of formats supported across operating systems isn't great. We'd also like to avoid codec support on the web becoming a vector for malware. Evil sites could prompt users to install malware-infected codecs required to play video from the site, for example.</p>
<p>By building support for specific formats in the browser we hope to guarantee to media producers that their media will be viewable by all web users safely.</p>
<p>Firefox currently supports the following media formats where the decoding support is built into the browser:</p>
<ul>
<li><a href="http://www.opus-codec.org/">Opus</a> audio in an Ogg container on all platforms.</li>
<li><a href="http://www.vorbis.com/">Vorbis</a> audio in an Ogg or WebM container on all platforms.</li>
<li><a href="http://www.theora.org/">Theora</a> video in an Ogg container on all platforms.</li>
<li><a href="http://en.wikipedia.org/wiki/VP8">VP8</a> video in a WebM container on all platforms.</li>
<li><a href="http://en.wikipedia.org/wiki/WAV">WAV</a> audio on all platforms.</li>
</ul>
<p>Opus, Vorbis, Theora and VP8 have the advantage of being open source and usable without paying license fees for generating and viewing content. Currently the decoding support in Firefox for these formats is not hardware accelerated. All decoding is done in software.</p>
<p>Support for the following formats in Firefox uses operating system decoder support. This means coverage is not available across all platforms. We are working towards this goal.</p>
<ul>
<li><a href="http://en.wikipedia.org/wiki/H.264/MPEG-4_AVC">H.264</a> video in an MP4 container on Firefox OS, some Android devices, and Desktop on Windows Vista and up.</li>
<li><a href="http://en.wikipedia.org/wiki/Advanced_Audio_Coding">AAC</a> audio in an MP4 container on Firefox OS, some Android devices, and Desktop on Windows Vista and up.</li>
<li><a href="http://en.wikipedia.org/wiki/MP3">MP3</a> audio in MP3 files on Firefox OS, some Android devices, and Desktop on Windows Vista and up. Support for Windows XP is coming in Firefox 26.</li>
</ul>
<p>These formats may use hardware accelerated decoding depending on operating system support. Support for these formats on other platforms is ongoing. On Linux we will use <a href="http://gstreamer.freedesktop.org/">GStreamer</a>. GStreamer playback support is already landed but not built by default. Enabling it is tracked in <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=886181">bug 886181</a>. You can produce custom builds by using the configure switch <code>--enable-gstreamer</code> and setting the pref <code>media.gstreamer.enabled</code> to <code>true</code>.</p>
<p>Support for Mac OS X for H.264, AAC and MP3 is tracked by bugs <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=801521">801521</a> and <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=851290">851290</a>.</p>
<p>Only a subset of Android devices are supported. For Android we are <a href="http://bluishcoder.co.nz/2012/08/22/h264-aac-mp3-support-for-firefox-android.html">using libstagefright</a> to provide hardware accelerated decoding. This uses an internal Android API that some device manufacturers customize which results in changes being needed to support specific devices. As a result we whitelist and blacklist devices that we know do or don't work. You can find the list of <a href="https://wiki.mozilla.org/Blocklisting/Blocked_Graphics_Drivers#On_Android_2">blacklisted or supported Android devices</a> on the Mozilla wiki. <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=860599">Bug 860599</a> is a work in progress to implement a more reliable means of using libstagefright which should provide support on more devices.</p>
<p>All the formats listed above are what Mozilla has generally decided to support "on the web". That is, a web developer can expect to provide a video element with one of these formats as the source and expect it to play, if not now then at some point in the future.</p>
<p>There are some other formats that Firefox supports on specific operating systems or devices. These are primarily used in Firefox OS for system applications and to implement various mobile specifications. While supported on the device they are not guaranteed to be supported across all platforms or even on the web. They may only work on apps installed on the device. These formats are:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/3GP_and_3G2">3GPP</a> container on Firefox OS. This is used for MMS videos and video recording on the device.</li>
<li><a href="http://en.wikipedia.org/wiki/Adaptive_Multi-Rate_audio_codec">AMR</a> audio format on Firefox OS. This is used for MMS and can only be played "on device" in privileged apps.</li>
</ul>
<p>Other media formats may be supported in the future. <a href="http://en.wikipedia.org/wiki/VP9">VP9</a> and <a href="https://xiph.org/daala/">Daala</a> for example are possibilities. For an official list of supported media formats, and the versions of Firefox they became supported, there is a <a href="https://developer.mozilla.org/en-US/docs/HTML/Supported_media_formats">supported media formats</a> page on <a href="https://developer.mozilla.org/">MDN</a>. For discussion of formats and media in general there is the <a href="https://lists.mozilla.org/listinfo/dev-media">dev-media</a> mailing list.</p>