Bluish Coder

Programming Languages, Martial Arts and Computers. The Weblog of Chris Double.


2015-09-14

Using Freenet for Static Websites

This website is generated from markdown to static HTML and I mirror it on Freenet. Data on Freenet slowly disappears if it is not regularly requested, and this happens to parts of my blog's mirror: many posts have a small target audience, and the intersection of Freenet users with that audience results in a low number of requests.

I've thought about changing the clearnet site so it is a thin proxy in front of a Freenet node, retrieving its data from Freenet. That way every clearnet request contributes to the healing of the Freenet data. It also means an update to the site on Freenet is automatically reflected in the clearnet version.

The recent announcement of Neocities mirroring their sites on IPFS prompted me to try this on Freenet to see how viable it was.

I've been able to get something working and this site is now being served directly from the Freenet data with nginx acting as a caching reverse proxy. Performance is acceptable. Taking this approach has a security tradeoff in that I've had to lock down internal node pages that may allow manipulating the node directly. See the end of this post for details on this.

Freenet has an API called FCP that allows retrieval of content programmatically. I thought about writing a simple HTTP proxy server that would retrieve requests from Freenet via FCP and send them back to the requester. I didn't want to invest too much effort into a proof of concept so I looked to see if there were existing tools to do this.
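To give an idea of what that would involve, here is a minimal sketch of fetching a key over FCP in Python. It talks to the node's FCP port (9481 by default) directly over a socket. The names here (fcp_fetch, the identifier and so on) are my own, the message handling is simplified, and a real proxy would need to cope with the many other message types FCP can send:

import socket

def fcp_fetch(uri, host="127.0.0.1", port=9481):
    """Fetch a Freenet key over FCP and return (mime_type, data)."""
    sock = socket.create_connection((host, port))
    f = sock.makefile("rwb")

    def send(name, **fields):
        # FCP messages are the message name, Field=Value lines, then EndMessage.
        lines = [name] + ["%s=%s" % item for item in fields.items()] + ["EndMessage", ""]
        f.write("\n".join(lines).encode("utf-8"))
        f.flush()

    def recv():
        # Read one message; returns (name, fields, terminator). A terminator
        # of "Data" means a binary payload of DataLength bytes follows.
        name = f.readline().decode("utf-8").strip()
        fields = {}
        while True:
            line = f.readline().decode("utf-8").strip()
            if line in ("EndMessage", "Data"):
                return name, fields, line
            key, _, value = line.partition("=")
            fields[key] = value

    send("ClientHello", Name="freenet-clearnet-proxy", ExpectedVersion="2.0")
    recv()  # NodeHello

    send("ClientGet", URI=uri, Identifier="get-1", ReturnType="direct")
    while True:
        name, fields, terminator = recv()
        if name == "AllData" and terminator == "Data":
            data = f.read(int(fields["DataLength"]))
            # The node reports the mime type it knows for the content.
            return fields.get("Metadata.ContentType"), data
        if name == "GetFailed":
            raise IOError(fields.get("CodeDescription", "Freenet fetch failed"))

The Metadata.ContentType field on the AllData response is the mime type Freenet knows for the content, which is exactly the information that goes missing with the ?forcedownload=true approach described below.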

SCGIPublisher is a plugin for Freenet that provides an SCGI interface to Freenet content, using a whitelist to expose only the desired data. It expects to expose actual Freenet URIs and keys. I wanted to hide all this behind my standard domain and I couldn't work out how to prevent the filtering of data and rewriting of URLs that it does. An example of SCGIPublisher usage is d6.gnutella2.info - a proxy that provides access to a number of Freenet sites from clearnet.

Freenet already has a built-in proxy, FProxy. It filters the requested data to remove JavaScript and to detect potentially malicious file formats. If I could disable this filtering and use nginx as a reverse proxy I'd be able to get what I wanted without writing any code. It turns out it can be disabled by doing the following:

  • In the Freenet node Configuration/Web Interface menu, set "Disable progress page when loading pages?" to false.
  • In the same menu, set "Maximum size of transparent pass-through in the web interface where we cannot show progress" and "Maximum size for transparent pass-through in the web interface where we can show a progress bar" to something higher than the largest file on the site you are exposing. Without this the user will receive an HTML progress page instead of the required content when the content is large.
  • Append "?forcedownload=true" to all requested URLs.

With this setup an nginx reverse proxy can be created that uses the Freenet node web interface as the upstream. Unfortunately setting ?forcedownload=true results in Freenet not sending the mime type for the content, so I had to create a lookup table in nginx that derives the mime type from the URL. The table looks like this:

map $uri $custom_content_type {
    default          "text/html";
    ~(.*\.xml)$      "text/xml";
    ~(.*\.rss)$      "application/rss+xml";
    ~(.*\.png)$      "image/png";
    ~(.*\.gif)$      "image/gif";
    ~(.*\.jpg)$      "image/jpeg";
    ~(.*\.pdf)$      "application/pdf";
    ..etc...
}

In the server section of the configuration I set some high timeout values to cater for the initial slowness of the Freenet node. I intercept the 404 and 500 error pages to display static HTML error messages instead. This stops the Freenet proxy error pages, which contain internal links that allow performing actions on the node, from reaching the user.

server {
    listen       80;
    server_name proxy.example.com;

    proxy_intercept_errors on;
    error_page 404 /404.html;
    error_page 500 /500.html;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    send_timeout 300;
    try_files $uri $uri/index.html;
    location /404.html {
        root /var/www/html/;
        allow all;
        internal;
    }
    location /500.html {
        root /var/www/html/;
        allow all;
        internal;
    }
    ... location blocks ...
}

Following that come the location blocks. These are hardcoded to the Freenet keys being exposed, to prevent the proxy being used to browse arbitrary Freenet sites. I've shortened the actual key below with .... to keep the example short.

This block hides headers returned by the Freenet proxy, adds the ?forcedownload=true query parameter and sets the proxy_pass to go to the Freenet node with the hardcoded key.

location /freenet/USK@..../bluishcoder/ {
        proxy_intercept_errors on;

        index index.html;
        proxy_redirect
            ~^/freenet:USK@..../bluishcoder/(?<edition>[0-9]+)/(.*)\?forcedownload=true$
            /freenet/USK@..../bluishcoder/$edition/$2;
        set $args forcedownload=true;
        proxy_hide_header Content-Type;
        proxy_hide_header Content-Transfer-Encoding;
        proxy_hide_header Content-Disposition;
        proxy_hide_header X-Content-Type-Options;
        proxy_hide_header X-Content-Security-Policy;
        proxy_hide_header X-Webkit-Csp;
        proxy_hide_header Content-Security-Policy;
        proxy_hide_header Cache-Control;
        proxy_hide_header Pragma;
        add_header Content-Type $custom_content_type;
        error_page 301 302 307 =200 @redir;

        proxy_pass http://127.0.0.1:8888/USK@..../bluishcoder/;
}

USK keys have an edition number that is incremented every time a site is updated. I bookmark the USK key in the node, which subscribes it to updates so it automatically picks up the latest edition, even if a request in the nginx configuration file is coded to use a specific edition number.

I don't include the USK edition number in the proxy_pass request here, to make handling edition updates easier. If a request is made for an edition where a later one is available, the node sends a 301 redirect. The Location header in the redirect has the format freenet:USK@.../bluishcoder/edition/..., where edition is the numeric value of the latest edition. The proxy_redirect statement rewrites this into the URL format that our location block uses.
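For example, if nginx asks for edition 17 but the node has already seen edition 18, the exchange looks roughly like this (key shortened as before, request path hypothetical):

# Location header sent by the Freenet node:
Location: freenet:USK@..../bluishcoder/18/index.html?forcedownload=true

# After the proxy_redirect rewrite:
Location: /freenet/USK@..../bluishcoder/18/index.html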

At this point the error_page statement is hit, which converts the 301 redirect response into a 200 (HTTP OK) and passes it to the @redir block:

location @redir {
  set $args forcedownload=true;
  proxy_hide_header Content-Type;
  proxy_hide_header Content-Transfer-Encoding;
  proxy_hide_header Content-Disposition;
  proxy_hide_header X-Content-Type-Options;
  proxy_hide_header X-Content-Security-Policy;
  proxy_hide_header X-Webkit-Csp;
  proxy_hide_header Content-Security-Policy;
  proxy_hide_header Cache-Control;
  proxy_hide_header Pragma;
  add_header Content-Type $custom_content_type;

  set $foo $upstream_http_location;
  proxy_pass http://127.0.0.1:8888$foo;
}

This block saves the original upstream HTTP Location header (the rewritten one in our own URL format) and passes it back to the proxy to get the data. In this way updated USK editions are handled even though requests are made for earlier editions.

With this in place we have a Freenet proxy for whitelisted URLs that works with requests like http://proxy.example.com/freenet/USK@.../bluishcoder/17/.... Putting another reverse proxy in front of this allows standard clearnet URLs to be used:

server {
    listen       80;
    server_name example.com;

    proxy_intercept_errors on;
    error_page 404 /404.html;
    error_page 500 /500.html;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    send_timeout 300;
    try_files $uri $uri/index.html;
    location /404.html {
        root /var/www/html/;
        allow all;
        internal;
    }
    location /500.html {
        root /var/www/html/;
        allow all;
        internal;
    }

    location / {
        index index.html;
        proxy_pass http://proxy.example.com/freenet/USK@..../bluishcoder/17/;
    }
}

Now requests to http://example.com proxy to the Freenet USK and avoid the internal filtering and progress pages.

I've got this site running with this setup and it works pretty well. I use nginx proxy caching to cache requests for a period of time to ease the load on the Freenet node. There are some rough edges. The first request for a large file, a video for example, takes a while as the node has to download the entire file before serving it; it can't stream it. Once it is cached by nginx, streaming to other requests is fine.
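The caching side is plain nginx proxy caching. The general shape is something like the following sketch; the zone name, cache path and durations are illustrative placeholders rather than my actual values:

# In the http block: a disk-backed cache zone for Freenet content.
proxy_cache_path /var/cache/nginx/freenet levels=1:2
                 keys_zone=freenet_cache:10m max_size=1g inactive=60m;

# In the location block that proxies to the Freenet node:
location / {
    proxy_cache freenet_cache;
    proxy_cache_valid 200 60m;   # serve successful fetches from cache for an hour
    proxy_cache_valid 404 1m;    # retry missing content sooner
    proxy_cache_use_stale error timeout updating;
    proxy_pass http://proxy.example.com/freenet/USK@..../bluishcoder/17/;
}

proxy_cache_use_stale means a slow or failing Freenet fetch can be answered with a stale copy, which suits a site that changes rarely.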

Taking this approach got things working quickly and helps prove the idea is viable, but I think in the long term it would be better to write a proxy that uses FCP as described earlier. That would enable using the mime type Freenet knows for each file, avoiding the manual table in the nginx configuration, and would avoid any possible security issues from accidentally leaking internal Freenet proxy pages.

Security Tradeoffs

If an internal node page somehow became available to the user, they could access any Freenet URL, download files to any location on the machine, and upload any files from it. They could also reconfigure the node and create connections to other nodes.

To restrict the damage they could do, I've changed the following settings on the node:

  • In Configuration/Core Settings change "Directories downloading is allowed to" to empty. This prevents the node being used to download files to disk.
  • In Configuration/Core Settings change "Directories uploading is allowed from" to empty. This prevents the node being used to upload files.
  • In Configuration/Web Interface change "Public Gateway Mode" to true. This prevents users from changing any node settings. You should configure an IP address for an admin user to access the node with full administration settings. Optionally, and this is what I do, enable it for all users. If I want to temporarily administer the node I shut it down from a shell, edit freenet.ini to change fproxy.publicGatewayMode to false (the relevant line is shown below), and restart.
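For reference, the freenet.ini line toggled in that last step is just:

fproxy.publicGatewayMode=false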

These issues would go away if a proxy that uses FCP, or a Freenet plugin that does something similar, were created.

This site is accessible over Tor as hidden service 6vp5u25g4izec5c37wv52skvecikld6kysvsivnl6sdg6q7wy25lixad.onion, or over Freenet using key:
USK@1ORdIvjL2H1bZblJcP8hu2LjjKtVB-rVzp8mLty~5N4,8hL85otZBbq0geDsSKkBK4sKESL2SrNVecFZz9NxGVQ,AQACAAE/bluishcoder/-61/

