While I was visiting Pitcairn Island I got to experience what the internet is like in areas where fast broadband isn't available.
I wrote about the internet setup on the island in my post about leaving for the island. While I was there I found that many modern websites simply failed to work. I was unable to use Gmail's standard interface: it would almost never finish loading, and when it did load it wouldn't work. Requests would time out, and the page would just hang waiting for the results of background HTTP requests. I had to fall back to the 'Basic' HTML interface, which actually worked quite well.
Facebook was mostly usable, but again background HTTP requests would often time out and fail. This would result in parts of the interface becoming unusable. The initial load of the page took minutes. I was rarely able to upload images larger than 80 KB - they'd never complete. Facebook's major win was that everything is integrated on the one site, so I didn't have to attempt loading any other major website for chatting, photo sharing, etc. It would have taken all day if I'd had to visit a dozen sites to do those things.
Twitter was painful. Pages for individual tweets took an age to load due to the size of the data being transferred. Sending a tweet would take a long time with no obvious indication that anything was happening - was it just slow, or had it failed?
I found using text based tools running through an SSH connection to a remote server to be more usable than the web browser in some cases.
Most issues I had with sites seemed to be the result of:
- XMLHttpRequests failing and the site not gracefully handling the failure. The user is never informed and is left waiting for a long time, wondering if things are working.
- Large images on pages. I remember reading a Hacker News thread about WebP where someone commented that an extra 10-20% on image compression doesn't matter in the modern world of high bandwidth availability. Pitcairn has taught me that this isn't true everywhere - every byte counts.
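The first failure mode above is partly addressable on the client side: wrap each background request in an explicit timeout and surface the failure to the user instead of hanging silently. A minimal sketch - the helper name, the timeout value, and the `showRetryBanner` function are my own illustrations, not taken from any particular site:

```typescript
// Hypothetical helper: reject if a request's promise doesn't settle
// within `ms`, so the UI can show an error instead of hanging forever.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); }
    );
  });
}

// Usage sketch: give up after 30s and tell the user, rather than
// leaving them guessing whether anything is happening.
// withTimeout(fetch("/feed/updates"), 30000, "feed refresh")
//   .catch((err) => showRetryBanner(err.message)); // showRetryBanner is hypothetical
```

On a satellite link, 30 seconds may still be optimistic, but any explicit feedback beats an indefinitely spinning page.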
These things can be hard to test for. Simulating low bandwidth/high latency connections is a chore and I doubt many website developers do it. Simulating failure of resources loading is also tricky - it probably never occurs during development so the failure path never gets tested.
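For what it's worth, on Linux the kernel's `netem` and `tbf` queueing disciplines can approximate a link like this. A sketch, assuming the interface is `eth0` and using made-up numbers roughly in the spirit of a satellite connection:

```shell
# Add ~600ms latency (±100ms jitter) and 3% packet loss...
sudo tc qdisc add dev eth0 root handle 1: netem delay 600ms 100ms loss 3%
# ...then cap bandwidth at 128kbit under it.
sudo tc qdisc add dev eth0 parent 1: handle 2: tbf rate 128kbit burst 16kb latency 400ms

# Remove the shaping when done.
sudo tc qdisc del dev eth0 root
```

It takes a few minutes to set up, which is probably why so few developers ever browse their own site this way.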
I'm curious whether a combination of the newer web standards could help in some of these areas. Could the offline application cache be used by sites like Facebook and Twitter to more aggressively cache the non-changing scripts and user interface portions of the site, for example?
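As a sketch of how that might look: the application cache is driven by a manifest file listing resources the browser should store locally and serve without hitting the network. The file names here are hypothetical:

```
CACHE MANIFEST
# v1 - bump this comment to force a re-download

CACHE:
/js/app.js
/css/site.css

NETWORK:
*
```

The page opts in via `<html manifest="site.appcache">`, and the manifest must be served with the `text/cache-manifest` MIME type. On a slow link, having the interface load instantly from local storage - with only the changing data fetched over the wire - could make a site like this dramatically more usable.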
On the positive side, less time on the internet meant more time enjoying the island.