Bluish Coder

Programming Languages, Martial Arts and Computers. The Weblog of Chris Double.


2015-01-15

Decentralized Websites with ZeroNet

ZeroNet is a new project that aims to deliver a decentralized web. It uses a combination of bittorrent, a custom file server and a web based user interface to do this, and manages to provide a pretty usable experience.

Users run a ZeroNet node and do their web browsing via the local proxy it provides. Website addresses are public keys, generated using the same algorithm as used for bitcoin addresses. A request for a website key results in the node looking in the bittorrent network for peers that are seeding the site. ZeroNet then connects directly to selected peers using a custom file server protocol that it implements and downloads the files required for the site. Bittorrent is only used for finding peers, not for transferring the site contents.

Once a site is retrieved the node starts acting as a peer, serving the site’s content to other users. The more users browsing your site, the more peers become available to provide the data. If the original site goes down the remaining peers can still serve the content.

Site updates are done by the owner making changes and then signing these changes with the private key for the site address. The update then gets distributed to the peers that are seeding the site.

Browsing is done through a standard web browser. The interface uses Websockets to communicate with the local node and receive real time information about site updates. The interface uses a sandboxed iframe to display websites.

Running

ZeroNet is open source and hosted on github. Everything is done through the one zeronet.py command. To run a node:

$ python zeronet.py
...output...

This will start the node and the file server. A check is made to see if the file server is available for connections externally. If this fails it displays a warning but the system still works. You won’t seed sites or get real time notification of site updates however. The fix for this is to open port 15441 in your firewall. ZeroNet can use UPNP to do this automatically but it requires a MiniUPNP binary for this to work. See the --upnpc command line switch for details.
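
If UPnP isn’t an option you can open the port manually with whatever firewall tooling your system uses. A minimal sketch, assuming a host that manages its firewall with ufw (substitute your own firewall frontend):

$ sudo ufw allow 15441/tcp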

The node can be accessed from a web browser locally using port 43110. Providing a site address as the path will access a particular ZeroNet site. For example, 1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr is the main ‘hello’ site that is first displayed. To access it you’d use the URL http://127.0.0.1:43110/1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr.

Creating a site

To create a site you first need to shut down your running node (using ctrl+c will do it) then run the siteCreate command:

$ python zeronet.py siteCreate
...
- Site private key: ...private key...
- Site address: ...site address...
...
- Site created!

You should record the private key and address as you will need them when updating the site. The command results in a data/address directory being created, where ‘address’ is the site address that siteCreate produced. Inside that are a couple of default files. One of these, content.json, contains JSON data listing the files contained within the site and signing information. This gets updated automatically when you sign your site after doing updates. If you edit the title key in this file you can give your site a title that appears in the user interface instead of the address.
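
As a rough illustration, a freshly created site’s content.json looks something like the following. The field names and values here are only indicative of the kind of data stored, so check the file your own node generates:

{
  "title": "My ZeroNet site",
  "address": "...site address...",
  "files": {
    "index.html": {
      "sha512": "...hash of the file...",
      "size": 1098
    }
  },
  "modified": 1421300400,
  "sign": "...signature added when the site is signed..."
}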

Another file that gets modified during this site creation process is the sites.json file in the data directory. It contains the list of all the sites and some metadata about them.

If you visit http://127.0.0.1:43110/siteaddress in your browser, where siteaddress is the address created with siteCreate, then you’ll see the default website that is created. If your node is peering successfully and you access this address from another node it will download the site, display it, and start seeding it. This is how the site data spreads through the network.

Updating a site

To change a site you must first store your files in the data/siteaddress directory. Any HTML, CSS, JavaScript, etc. can be put here. It’s like a standard website root directory. Just don’t delete the content.json file that’s there. Once you’ve added, modified or removed files you run the siteSign command:

$ python zeronet.py siteSign siteaddress
- Signing site: siteaddress...
Private key (input hidden):

Now you enter the private key that was displayed (and hopefully you saved) when you ran siteCreate. The site gets signed and the signing information in content.json is updated. To publish these changes to peers seeding the site:

$ python zeronet.py sitePublish siteaddress
...publishes to peers...

If your node is running it will serve the files from the running instance. If it is not then the sitePublish command will continue running to serve the files.

Deleting a site

You can pause seeding a site from the user interface but you can’t delete it. To do that you must shut down the node and delete the site’s data/siteaddress directory manually. You will also need to remove its entry from data/sites.json. When you restart the node it will no longer appear.
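
A sketch of those manual steps, with the node already shut down and ‘siteaddress’ standing in for the actual site address:

$ rm -r data/siteaddress
$ $EDITOR data/sites.json    # remove the entry for siteaddress
$ python zeronet.py          # restart the node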

Site tips

Because the website is displayed in a sandboxed iframe there are some restrictions on what it can do. The most obvious is that only relative URLs work in anchor elements. If you click on an absolute URL it does nothing. The sandboxed iframe has the allow-top-navigation option, which means you can link to external pages or other ZeroNet sites if you set the target attribute of the anchor element to _top. So this will work:

<a href="http://bluishcoder.co.nz/" target="_top">click me</a>

But this will not:

<a href="http://bluishcoder.co.nz/">click me</a>

Dynamic websites are supported, but require help from centralized services. The ZeroNet node includes an example of a dynamic website called ‘ZeroBoard’. This site allows users to enter a message in a form and have it published to a list of messages which all peering nodes will see. It does this by posting the message to an external web application that the author runs on the standard internet. This web app updates a file inside the site’s ZeroNet directory and then signs it. The result is published to all peers and they automatically get the update through the Websocket interface.

Although this works it’s unfortunate that it relies on a centralized web application. The ZeroNet author has posted that they are looking at decentralized ways of doing this, maybe using bitmessage or some other system. Something involving peer to peer WebRTC would be interesting.

Conclusion

ZeroNet seems to be most similar to tor, i2p or freenet. Compared to these it lacks the anonymity and encryption aspects. But it decentralizes the site content which tor and i2p don’t. Freenet provides decentralization too but does not allow JavaScript in sites. ZeroNet does allow JavaScript but this has the usual security and tracking concerns.

Site addresses are in the same format as bitcoin addresses. It should be possible to import the private key into bitcoin, and then bitcoins sent to the public address of a site could be accessed by the site owner. I haven’t tested this but I don’t see why it couldn’t be made to work. Maybe this could be leveraged somehow to enable a web payment method.

ZeroNet’s lack of encryption or obfuscation of the site contents could be a problem. A peer holds the entire site in a local directory. If this contains malicious or illegal content it can be accidentally run or viewed. Or it could be picked up in automated scans and the user held responsible. Even if the site originally had harmless content the site author could push an update out that contains problematic material. That’s a bit scary.

It’s early days for the project and hopefully some of these issues can be addressed. As it is though it works well, is very usable, and is an interesting experiment in decentralizing websites. Some links for more information:

Tags: mozilla  zeronet 

2015-01-08

Improving Linux Font Support in Self

In the Linux version of the Self programming language implementation the fonts used are standard X11 fonts. On modern Linux systems these don’t look great and a common question asked on the mailing list is how to improve them. Fonts on the Mac OS X build of Self use a different system and they look much better. It would be good to convert the Linux version to use FreeType to gain more control over fonts.

I worked on adding FreeType support a couple of years ago and wrote about it on the mailing list. I haven’t done much on it since then but the code is in my github repository under the linux_fonts branch. That work adds Self primitives to access FreeType but does not integrate it into the Self font system. I hope to be able to continue this work sometime but I’m unlikely to get to it in the near future. This post points to the code and shows how to use the primitives, in case someone else would like to take it forward as a project.

To try the code out you’ll need to build Self in the usual manner but using that branch. Once the desktop is launched I test it by creating an object with the following slots:

(| parent* = traits oddball.
   window <- desktop w anyWindowCanvas.
   draw <- nil.
   font <- nil.
   xftcolor <- nil.
   xrcolor <- nil.
 |)

With an outliner for this object I create an evaluator for it and, one at a time, evaluate the following code snippets:

draw: window display xftDrawCreate: window platformWindow
  Visual: window display screen defaultVisualOfScreen
  Colormap: window display screen defaultColormapOfScreen

xftcolor: xlib xftColor new.
xrcolor: xlib xRenderColor new.
xrcolor alpha: 16rffff

window display xftColorAllocValue: window display screen defaultVisualOfScreen
  Colormap: window display screen defaultColormapOfScreen
  RenderColor: xrcolor
  XftColor: xftcolor

font: window display xftFontOpenNameOnScreen: window display screen number
 Name: 'verdana-18'

draw xftDrawString8: xftcolor
  Font: font X: 100 Y: 100
  String: 'Hello World!'

This results in the text ‘Hello World!’ appearing at position 100@100 on the desktop in the Verdana font. The video below demonstrates this and shows how it’s done in the Self user interface:

This is a common workflow I use to prototype things in Self. I create an empty object and populate it with slots to hold data. With an evaluator created for that object these slots are accessible without needing to have an object to call them on. This is known as ‘implicit self calls’. The message for the slot is implicitly sent to the current object. I create and remove slots as needed. I can use the Self outliner to drill down on the slots to look at and manipulate those objects specifically if needed.

Hopefully this FreeType support can be used as a base for better looking fonts on Linux. If you are keen to take it further, or have ideas on how to integrate it, I can be contacted using the details at the bottom of my blog, or you can raise it in the Self mailing list or on github.

Tags: self 

2014-12-31

Using Inferno OS on Linux

It’s the end of the year and I’m going through some of the patches I have for projects and putting them online so I have a record of them. I’m working through things I did on Inferno OS.

Source repositories

The official Inferno source is hosted in a mercurial repository on Google Code. There is also a repository on bitbucket containing a snapshot of this with changes to get Inferno building on Android systems (the Hellaphone port). Finally there is a tarball containing a snapshot which includes fonts required by Inferno that are not stored in the mercurial repository.

I carry a couple of custom patches to Inferno and I also have changes to the Android port that I keep in a patch file. When working on these I find it useful to be able to diff between the current Inferno source and the Hellaphone source to see what changes were made.

I’ve cleaned these up and put the patches in a github repository with a conversion of the mercurial repositories to git. The Hellaphone code is a branch of the repository, making it easy to cherry pick and diff between that and the main source. I’ve put this in github.com/doublec/inferno.

There are four main branches in that repository:

  • 20100120 - Snapshot with the additional fonts
  • hellaphone - Hellaphone Android port
  • inferno - Direct import of the Inferno mercurial repository
  • master - Everything needed for building inferno on Linux. It is a recent working version of the inferno branch with the additional fonts from the 20100120 branch and any additional work in progress patches to work around existing issues.

I use master for running Inferno on desktop and hellaphone for phone development. The README on master explains how to build and the README on hellaphone has the steps to get it running on a phone.

Building on Linux

Building Inferno on Linux with the official source involves downloading an archive containing a snapshot of the source with the addition of some font files that are licensed differently. You then do a mercurial update to get the latest source code. Something like the following performs the build:

$ wget http://www.vitanuova.com/dist/4e/inferno-20100120.tgz
$ tar xvf inferno-20100120.tgz
$ cd inferno
$ hg pull -u
$ ...edit mkconfig so entries are set below...
ROOT=/path/to/inferno
SYSHOST=Linux
OBJTYPE=386
$ export PATH=$PATH:`pwd`/Linux/386/bin
$ ./makemk.sh
$ mk nuke
$ mk install

Building using my github repository requires:

$ git clone https://github.com/doublec/inferno
$ cd inferno
$ sh Mkdirs
$ ...edit mkconfig so the following entries are set to these values...
ROOT=/root/of/the/inferno/git/clone
SYSHOST=Linux
OBJTYPE=386
$ export PATH=$PATH:`pwd`/Linux/386/bin
$ ./makemk.sh
$ mk nuke
$ mk install

Running Inferno

I use a shell script to run Inferno. The contents look like:

export PATH=$PATH:~/inferno/Linux/386/bin
export EMU="-r/home/$USER/inferno -c1 -g1920x1008"
exec emu $* /dis/sh.dis -a -c "wm/wm wm/logon -u $USER"

This starts Inferno using the same username as on the Linux system and with the desktop size set to 1920x1008. The JIT is enabled (the -c1 flag does this). For this to work I need to have a /usr/chris directory in the Inferno file system with some default files. This can be created by copying the existing /usr/inferno directory:

$ cd inferno
$ cp -r usr/inferno usr/chris

The patches in my master branch add a -F command line switch to the emu command to use full screen in X11. This is useful when using a window manager that doesn’t allow switching to and from full screen. On Ubuntu I often run Inferno full screen in a workspace that I can switch back and forth to get between Linux and Inferno. This can be run with the above shell script (assuming it is named inferno and on the path):

$ inferno -F

Accessing the host filesystem and commands

Often I want to access the host filesystem from within Inferno. This can be done using bind with a special path syntax to represent the host system. I like to keep the path on Inferno the same as the path on Linux so when running host commands the paths in error messages match up. This makes it easier to click on error messages and load the file in Inferno text editors. I map /home as follows (‘;’ represents an Inferno shell prompt):

; mkdir /home
; bind '#U*/home' /home

Now all the directories and files under /home in Linux are available under /home in Inferno.

The Inferno os command is used to run host programs from within Inferno. To run git for example:

; os -d /home/username/inferno git status
...output of git...

Note that I had to pass the -d switch to give the path on the host filesystem that will be the current working directory for the command. I can even do Firefox builds from within Inferno:

; cd /home/username/firefox
; os -d /home/username/firefox ./mach build
...firefox building...

Any errors in source files that appear can be shown in the Inferno acme editor as I’ve mapped /home to be the same on Inferno as on the host.

VNC in Inferno

Sometimes I need to run graphical host commands from within Inferno. Mechiel Lukkien has a large number of useful Inferno programs written in Limbo. One of these is a vnc client. From a source clone of this project it can be installed in Inferno with something like:

; cd vnc
; SYSROOT=Inferno
; ROOT=
; mk install

Start a VNC server from the Linux side:

$ vncpasswd
...set password...
$ vncserver :1

Connect to it from the Inferno side:

; vncv tcp!127.0.0.1!5901

Now you can run Firefox and other useful programs from within Inferno. Inferno has its own web browser, Charon, but it’s nice to be able to use a full browser, terminals, etc to do Linux things from within Inferno when needed. vncv isn’t feature complete - it doesn’t do modifier keys unfortunately, but is still a useful tool.

Mechiel Lukkien has many other useful libraries worth exploring. There is an ssh implementation, a mercurial client, and an interesting ‘irc filesystem’ which I’ve written about before.

Other tips

Acme is one of the editors available in Inferno. It’s also available on other operating systems and Russ Cox has a Tour of Acme video which explains its features. Most of these work in the Inferno port too.

Pete Elmore has an introduction to the Inferno shell and other Inferno posts. The shell has a way of passing data from one shell command to another. For example, du can recursively list all files in the current directory and subdirectories:

; du -an

To pass this list to grep to find the text ‘ASSERT’ within these files:

; grep -n ASSERT `{du -an}
filename.h:100: ASSERT(1)

Limbo

Limbo is the programming language used in Inferno. It is a typed programming language with support for concurrency using channels and lightweight threads. The book Inferno Programming with Limbo is available as a PDF and also makes for a good introduction to Inferno itself. The line between Inferno as an OS and Inferno as a language runtime is a bit blurry at times.

Why Inferno?

Inferno has some interesting ideas with regard to distributing computing power and provides a way to explore some ideas that were in the Plan 9 OS but usable on multiple platforms. My post on sharing computer and phone resources gives some examples of ways it could be used. Its lightweight threads and concurrency make for a useful programming system for server based tasks.

Inferno is a small hackable OS for learning about operating systems. For more on this the book Principles of Operating Systems by Brian Stuart covers Inferno and Linux. One aspect I was interested in exploring was porting parts of the OS from C to the ATS programming language in a similar manner to what I describe in my preventing heartbleed bugs with safer programming languages post.

More on Inferno is available at:

Tags: inferno 

2014-12-22

Self Benchmarking

I was asked on twitter about the current speed of the Self implementation. The request was for a method send benchmark so I wrote a simple one and compared against Pharo, a Smalltalk implementation.

The implementation and technology behind Self is quite old in comparison to modern compiler implementations but at the time it was state of the art. I hoped it would hold up reasonably well. The test I wrote in Self was:

(|
  doSomething = ( ^self ).
  test = ( |n <- 0|
           [ n < 100000000 ] whileTrue: [ doSomething. n: n + 1 ]
         )
|)

Running this in the Self shell shows:

"Self 1" _AddSlots: ...code snippet from above...
shell
"Self 2" [ test ] time.
2587

2.5 seconds seemed a bit slow to me so I tested in Pharo to confirm and to see how it compares. The Pharo code looks almost exactly like the Self code:

doSomething = ^self
test = |count|
       count := 0.
       [ count < 100000000 ] whileTrue: [
         count := count + 1.
         doSomething.
       ].

[ MyObject new test ] timeToRun
  => 0:00:00:00.239

That’s 239ms vs 2,587ms, a factor of over 10x. Further investigation revealed that calling ‘time’ in Self seems to cause the code to run slower. If I call the ‘test’ method first and then call ‘time’ it’s much faster:

"Self 2" [ test ] time.
2587
"Self 3" [ test ] time.
2579
"Self 4" test.
nil
"Self 5" [ test ] time.
650
"Self 6" [ test ] time.
628

At 650ms it is about 2.7x slower than Pharo, an improvement on the earlier 10x. More investigation is needed to see if there is room for other improvements.

The Self implementation has some primitives that can be changed to show debugging information from the JIT. All primitives can be listed with:

primitives primitiveList do: [ | :e | e printLine ].

Looking through this shows some interesting ones prefixed with _Print that can be set to output debug data. One is _PrintCompiledCode. Setting this to true allows viewing the generated assembler code on the Self console.

"Self 16" _PrintCompiledCode: true.
false
"Self 17" 40 + 2.
...
  // loadOop2
movl $0xa0 (40), #-16(%ebp)
  // loadOop2
movl $0x8 (2), #-20(%ebp)
  // loadArg
movl #-20(%ebp), %ebx
movl %ebx, #4(%esp)
  // selfCall
movl #-16(%ebp), %ebx
movl %ebx, (%esp)
nop
nop
nop
call 0x8186597 <SendMessage_stub> (bp)
  // begin SendDesc
jmp L7f
  .data 3
jmp L9f
  .data 0
  .data 0
  .data 0x4578341 ('+')
  .data 4
L7: 
L8: 
  // end SendDesc
movl %eax, #-16(%ebp)
  // epilogue
movl #-16(%ebp), %eax
  // restore_frame_and_return
leave
ret

Others, like _PrintInlining, display debug information related to inlining code.

"Self 18" _PrintInlining: true
false
"Self 19" test.
*inlining size, cost 0/size 0 (0x8b7e864)
*PIC-type-predicting - (1 maps)
*type-casing -
*inlining - (smallInt.self:153), cost 1/size 0 (0x8b7ee38)*
 *inlining asSmallInteger (number.self:108), cost 1/size 0 (0x8b7fa94)*
  *inlining raiseError, cost 0/size 0 (0x8b80530)*
  *inlining asSmallIntegerIfFail: (smallInt.self:302), cost 0/size 0 (0x8b808fc)*
 *inlining TSubCC:
 *cannot inline value:With:, cost = 10 (rejected)
 *marking value:With: send ReceiverStatic
 *sending value:With:
*sending -
*inlining size:, cost 0/size 0 (0x8b8434c)
*inlining rep, cost 0/size 0 (0x8b846a8)
*PIC-type-predicting removeFirstLink (1 maps)
*type-casing removeFirstLink
*inlining removeFirstLink (list.self:300), cost 2/size 0 (0x8b84b48)*
 *inlining next, cost 0/size 0 (0x8b85628)
 *PIC-type-predicting remove (1 maps)
 *type-casing remove
 *cannot inline remove, cost = 9 (rejected)
 *sending remove
*sending removeFirstLink
*PIC-type-predicting value (1 maps)
*type-casing value
*inlining value, cost 0/size 0 (0x8b86570)*
*sending value
*inlining asSmallInteger (number.self:108), cost 1/size 0 (0x8b7e5b0)*
 *inlining raiseError, cost 0/size 0 (0x8b7f074)*
 *inlining asSmallIntegerIfFail: (smallInt.self:302), cost 0/size 0 (0x8b7f440)*
*inlining TSubCC:
*cannot inline value:With:, cost = 10 (rejected)
*marking value:With: send ReceiverStatic
*sending value:With:
nil

For more involved benchmarks there is some code shipped with the Self source. It can be loaded with:

"Self 28" bootstrap read: 'allTests' From: 'tests'.
reading ./tests/allTests.self...
reading ./tests/tests.self...
reading ./tests/programmingTests.self...
reading ./tests/debugTests.self...
reading ./tests/lowLevelTests.self...
reading ./tests/numberTests.self...
reading ./tests/deltablue.self...
reading ./tests/sicTests.self...
reading ./tests/branchTests.self...
reading ./tests/nicTests.self...
reading ./tests/testSuite.self...
reading ./tests/languageTests.self...
reading ./tests/cons.self...
reading ./tests/benchmarks.self...
reading ./tests/richards.self...
reading ./tests/parser.self...
reading ./tests/parseNodes.self...
modules allTests

There are methods on the bootstrap object for running the tests and printing results. For example:

"Self 32" benchmarks measurePerformance

                 compile    mean       C    mean/C       %
recur:                 5       0
sumTo:                 2       7
sumFromTo:             2       7
fastSumTo:             2       6
nestedLoop:            2      10
...

There are also measurePerformance2 and measurePerformance3 methods. The code comments for the measure2 and measure3 methods explain the differences.

Self 2 was well known for generating very fast code that compared favourably with C. The implementation was described in Craig Chambers’ thesis. Compilation was slow however, so in Self 3 and 4 two new compilers were created: ‘nic’ and ‘sic’. I believe these are covered in Urs Hölzle’s thesis. The ‘nic’ compiler is the ‘Non Inlining Compiler’ and is simpler to implement. It’s the compiler you write to get Self bootstrapped and running on new platforms fairly quickly. There is no inlining and no type feedback so performance is slower, as shown by the benchmarking when changing the compiler used, described below. The ‘sic’, or ‘Single Inlining Compiler’, generates better code through more optimisations. While neither generates code as fast as the Self 2 compiler, they compile faster and make for a better interactive system. You can read more about this in the Merlintec Self FAQ.

There is a defaultCompiler slot in the benchmark object that can be set to nic or sic to compare the different JIT compilers that Self implements. Comparing the ‘nic’ compiler vs the ‘sic’ compiler shows a speedup of about 6x in the ‘richards’ benchmark when using ‘sic’.

There’s probably a fair bit of low hanging fruit to improve run times. I don’t think the x86 backend has had as much work on it as the Sparc or PPC backends. The downside is much of the compiler code is written in C++ so for people interested in ‘Self the language’ it’s not as fun to hack on. Klein was an attempt to write a Self VM in Self and includes a compiler and assembler which might make a more interesting project for those that want to use Self itself to implement compiler code.

Tags: self 

2014-12-18

Using Freenet

I’ve been following the Freenet project for many years, occasionally firing it up and seeing if I can do anything useful with it. I’ve been using it regularly over the last month and it has come a long way since I first tried it. It’s much faster than it was in the past. This post describes a bit of how I use it and some of the issues I worked around when publishing content.

Overview

There are no dynamic servers on Freenet. No user hosts a site. It’s a data store: users push data into the content store, where it becomes retrievable by anyone with the key. Freenet is essentially a large encrypted distributed hash table.

Nodes set aside an amount of disk space and users choose to store data under a key. Retrieval of the key goes out into the distributed hash table and returns the data associated with it. Inserting data into the store pushes that data out to other nodes; it is not generally stored on your own node. Requesting data sends the request over the network and the data migrates towards your node as it is returned. A redundancy scheme is used to enable recovery from data loss: data is split into segments with enough redundancy that the full data can still be recovered if some of the segments are lost. The network is lossy in that as more data is inserted, less frequently requested data drops out. Data stored is immutable: once a key is in the store it will always be associated with the same data.

Freenet data is requested using keys. There are different types of keys. A high level overview would be:

KSK@somewhere
A KSK key can be chosen by the inserter. The 'somewhere' portion of the key can be any value. This allows generating keys to access data using easier to remember words or phrases. The downside is they can be re-inserted by anyone with different data. What you get when you request the key depends on what data has been inserted by different users under it.
CHK@...
CHK keys have the '...' portion computed based on the hash of the data content. These have the advantage that the key is always the same for the same data. If data for a CHK key has dropped out of the network anyone can 'heal' that data by reinserting the same file. The hash for the CHK key will be the same and the data will become available again under that same key. This is like being able to have any user fix a 404 response on the standard internet by reuploading the same file anywhere.
SSK@...
An SSK key has a unique cryptographically generated hash that is different for any given insert of data. These cannot be 'healed' if the data drops out as a re-insert will have a different key.
USK@.../foo/1
A USK key allows updateable content. Note the number at the end. This increments every time new data for the key is inserted. When requesting data freenet can look for the highest number available and return that. It's useful for freenet hosted blogs which have regularly updated content.
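
Whichever key type is used, requesting it through the local proxy (described under Setup below) is just a matter of putting the key in the URL path, for example:

http://127.0.0.1:8888/KSK@somewhere
http://127.0.0.1:8888/USK@.../foo/1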

Setup

The freenet software really needs to run 24x7 to be effective. I followed the headless install instructions to install on a server machine and access the freenet proxy on my client machines using an SSH tunnel. An SSH command like the following sets up local ports that tunnel to the server so they can be accessed locally:

ssh -L 8888:127.0.0.1:8888 -L 8080:127.0.0.1:8080 -L 9481:127.0.0.1:9481 me@myserver.local

The 8888 port is for the freenet proxy software where you access most freenet functionality from the browser. Port 8080 is for the Freenet Message System if you install that and 9481 is for the API interface that jSite uses.

It takes a few hours for a new freenet node to establish itself and get up to speed. Expect much slowness initially. It gets better though.

Social Networking on Freenet

Freenet has some social networking functionality. There is a web of trust for identities, distributed anonymous email, twitter-like microblogging, forums and IRC like chat. How to set these up is described in the Freenet Social Networking Guide. Setting up an identity in the web of trust and Sone for the microblogging will give a good start to using freenet socially.

You can create as many web of trust identities as you want and switch between them for different purposes. I use Freenet non-anonymously and my identity on there is associated with my real world identity but I could also have anonymous ones for other purposes.

Freenet Sites

A freenet site is usually stored under a USK key so it can be updated. Software to insert a directory of HTML as a USK is the easiest way of uploading a site or blog. I use jSite. I mirror this blog to freenet under the key USK@1ORdIvjL2H1bZblJcP8hu2LjjKtVB-rVzp8mLty~5N4,8hL85otZBbq0geDsSKkBK4sKESL2SrNVecFZz9NxGVQ,AQACAAE/bluishcoder/-7. Note the negative number at the end. When requested this results in freenet starting from edition ‘7’ and looking for the most recent published edition from there. Sites can be bookmarked in the freenet proxy and it will automatically look for and update the bookmark when a new edition is inserted.

There were some issues I had to work around when mirroring my Jekyll based blog. I have absolute links in the blog that reference other pages. These don’t work if copied directly to a freenet site as the freenet proxy has the content key as the initial part of the URL. So a link to a page in the proxy looks like /USK@longhash/bluishcoder/7/2014/12/17/changing-attributes-in-self-objects.html. An internal link that starts with / to go to a page will not work as it doesn’t contain the USK key prefix. I tried modifying Jekyll to use relative URLs but wasn’t successful. The approach I ended up taking was to follow the advice in this github issue. My _config.yml file contains these baseurl entries:

baseurl: "file:///some/path/bluishcoder/_site"
#baseurl: /USK@longlonghash/bluishcoder/7
#baseurl: "http://bluishcoder.co.nz"

All my internal links in blog posts have the baseurl prefixed. For example (Remove the backslash - I had to add it to prevent Jekyll from replacing it with the baseurl here):

[link to a video]({\{site.baseurl}}/self/self_comment.webm)

This gets replaced at blog generation time by the baseurl entry in _config.yml. I generate my internet based blog with the relevant baseurl, copy that to my webserver, then generate the freenet based one with the correct baseurl and push that to freenet using jSite. This is a bit tedious but works well. A blog system that uses only relative URLs would be a lot easier as you can just insert the site directly.
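
A sketch of how the two passes fit together; the hostname and paths here are placeholders, and the final step is done with jSite as described above:

$ # pass 1: set baseurl in _config.yml to the normal web address
$ jekyll build
$ rsync -a _site/ me@webserver:/var/www/bluishcoder/

$ # pass 2: set baseurl in _config.yml to the USK prefix
$ jekyll build
$ # ...then insert _site with jSite...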

Note that freenet sites cannot use JavaScript and some content is filtered out for security reasons. Simple HTML and CSS works best.

Photo heavy sites

I have a site heavy in photos which is a mirror of some photos from my Pitcairn Island trip. This is under key USK@2LK9z-pdZ9kWQfw~GfF-CXKC7yWQxeKvNf9kAXOumU4,1eA8o~L~-mIo9Hk7ZK9B53UKY5Vuki6p4I4lqMQPxyw,AQACAAE/pitcairnisland/3. The interesting problem with photo heavy sites is how best to present the photos while also preventing them from dropping out of the network.

If the main page of the site has thumbnail images and allows the user to see the full image by selecting the thumbnail then the thumbnails tend to stay alive as they are the most requested. Unfortunately some of the full images will tend to drop out eventually. An approach recommended by long time freenet users is to reference the full photo in the IMG tag but scale it to thumbnail size. This causes the page to load all of the full size images, scaled down, and makes it slow to load. But all the images stay alive.

I like the fast loading of thumbnails though, so I tried to find a middle ground. Image preloading using CSS seemed like a viable solution but Freenet’s content filter has issues with it. With some tweaking there this approach would work well. The thumbnails would load for quick viewing and the full images would pre-load without the user noticing that the page is still loading. This should result in most images staying around.

The approach I ended up using was to have a hidden DIV at the end of the page containing the full sized images. They aren’t displayed but they cause the full size images to be retrieved while the user sits on the main page. The downside is the page still shows that it’s loading, which isn’t optimal. I also link to a page that has the full sized images scaled to thumbnail size as a viewing option. Hopefully the issue with the CSS preloading approach can be resolved as that has a better user experience.
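
A sketch of the markup for that approach, with illustrative file names (exactly how the DIV is hidden may need adjusting to get past Freenet’s content filter):

<!-- visible gallery: thumbnails linking to the full images -->
<a href="photo1-full.jpg"><img src="photo1-thumb.jpg" alt="photo 1"></a>

<!-- hidden block at the end of the page: the full sized images are
     retrieved, and so kept alive, without being displayed -->
<div style="display:none">
  <img src="photo1-full.jpg">
  <img src="photo2-full.jpg">
</div>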

Conclusion

Other than mirroring my blog and using Sone I haven’t done too much else. There is a ‘bitcoin over freenet’ program, which mirrors the blockchain in freenet and allows submitting and retrieving transactions, that looks interesting to explore. Freenet would seem to be useful for some things Tor is used for (dissemination of information under oppressive regimes) without the requirement of needing an active server that can be located and attacked.

There’s a great set of PDF slides that cover more about what Freenet can do if you’re interested in looking into it more.

My interest has been more about looking at how freenet can be used as a more encrypted and non-hosted distributed alternative to services like Twitter, Facebook, hosted email and the like. As long as you can put up with higher latency and the different idioms an ‘immutable internet that decays’ requires, it seems that this is viable.

I’m curious what other services people could build on top of it.

Tags: freenet 


This site is accessible over tor as hidden service mh7mkfvezts5j6yu.onion, ZeroNet using key 13TSUryi4GhHVQYKoRvRNgok9q8KsMLncq, or Freenet using key:
USK@1ORdIvjL2H1bZblJcP8hu2LjjKtVB-rVzp8mLty~5N4,8hL85otZBbq0geDsSKkBK4sKESL2SrNVecFZz9NxGVQ,AQACAAE/bluishcoder/-10/

