Bluish Coder

Programming Languages, Martial Arts and Computers. The Weblog of Chris Double.


2015-03-24

Contributing to Servo

Servo is a web browser engine written in the Rust programming language. It is being developed by Mozilla. Servo is open source and the project is hosted on github.

I was looking for a small project to do some Rust programming and Servo being written in Rust seemed likely to have tasks that were small enough to do in my spare time yet be useful contributions to the project. This post outlines how I built Servo, found issues to work on, and got them merged.

Preparing Servo

The Servo README has details on the prerequisites needed. Installing the prerequisites and cloning the repository on Ubuntu:

$ sudo apt-get install curl freeglut3-dev \
   libfreetype6-dev libgl1-mesa-dri libglib2.0-dev xorg-dev \
   msttcorefonts gperf g++ cmake python-virtualenv \
   libssl-dev libbz2-dev libosmesa6-dev 
...
$ git clone https://github.com/servo/servo

Building Rust

The Rust programming language has been fairly volatile in terms of language and library changes. Servo deals with this by requiring a specific git commit of the Rust compiler to build. The Servo source is periodically updated for new Rust versions. The commit id for Rust that is required to build is stored in the rust-snapshot-hash file in the Servo repository.

If the Rust compiler isn’t installed already there are two options for building Servo. The first is to build the required version of Rust yourself, as outlined below. The second is to let the Servo build system, mach, download a binary snapshot and use that. If you wish to do the latter, which may make things easier when starting out, skip the following Rust build step.

$ cat servo/rust-snapshot-hash
d3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf/rustc-1.0.0-dev
$ git clone https://github.com/rust-lang/rust
$ cd rust
$ git checkout -b servo d3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf
$ ./configure --prefix=/home/myuser/rust
$ make
$ make install

Note that I configure Rust to be installed in a directory off my home directory. I do this so I can manage multiple Rust versions side by side. The build takes a long time, and once it’s done you need to add the install directories to the PATH:

$ export PATH=$PATH:/home/myuser/rust/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/myuser/rust/lib
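
To confirm the right compiler will be picked up, check that rustc resolves to the version just installed:

$ which rustc
$ rustc --version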

Building Servo

There is a configuration file used by the Servo build system to store information on which Rust compiler to use, whether to use a system-wide Cargo (Rust package manager) install, and various paths. This file, .servobuild, should exist in the root of the cloned Servo source. There is a sample file that can be used as a template. The values I used were:

[tools]
system-rust = true
system-cargo = false

[build]
android = false
debug-mozjs = false

If you want to use a downloaded binary snapshot of Rust to build Servo you should set the system-rust setting to false. With it set to true as above, it expects to find a Rust of the correct version on the PATH.

Servo uses mach, the command line interface that is also used to build Firefox. Once the .servobuild is created, Servo can be built with:

$ ./mach build

Servo can be run with:

$ ./mach run http://bluishcoder.co.nz

To run the test suite:

$ ./mach test

Finding something to work on

The github issue list has useful labels for finding work, such as E-easy, which marks issues suitable for newcomers, and C-assigned, which shows whether an issue is already taken.

For my first task I searched for E-easy issues that were not currently assigned (filtering with the C-assigned label). I commented in the issue asking if I could work on it and it was then assigned to me by a Servo maintainer.

Submitting the Fix

Fixing the issue involved:

  • Fork the Servo repository on github.
  • Clone the fork locally and make the required changes to the source in a branch created for the issue.
  • Commit the changes locally and push them to the fork on github.
  • Raise a pull request for the branch.
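
As a concrete sketch of that workflow, with placeholder user and branch names:

$ git clone https://github.com/myuser/servo
$ cd servo
$ git checkout -b fix-my-issue
... edit the source ...
$ git commit -a -m "Describe the fix"
$ git push origin fix-my-issue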

Raising the pull request triggers a couple of automated actions on the Servo repository: an automated response thanking you for the changes, followed by a link to the external Critic review system.

Reviews

The Servo project uses the Critic review tool. This will contain data from your pull request and any reviews made by Servo reviewers.

To address reviews I made the required changes and committed them to my local branch as separate commits using the fixup flag to git commit. This associates each new commit with the original commit that contained the change, which allows easier squashing later.

$ git commit --fixup=<commit id of original commit>

The changes are then pushed to the github fork and the previously made pull request is automatically updated. The Critic review tool also automatically picks up the change and will associate the fix with the relevant lines in the review.

With some back and forth the changes get approved and a request might be made to squash the commits. If fixup was used to record the review changes then they will be squashed into the correct commits when you rebase:

$ git fetch origin
$ git rebase --autosquash origin/master

Force pushing this to the fork updates the pull request. When the reviewer marks the change as r+, the merge to master starts automatically, along with a build and test run. Any test failures are added to the pull request and the review process starts again. If the tests pass and the merge succeeds, the pull request is closed and the task is done.
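
The force push is along these lines, with the branch name again a placeholder:

$ git push --force origin fix-my-issue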

A full overview of the process is available on the github wiki under Github and Critic PR handling 101.

Conclusion

The process overhead of contributing to Servo is quite low. There are plenty of small tasks that don’t require a deep knowledge of Rust. The first task I worked on was basically a search/replace. The second was more involved: implementing the view-source protocol and text/plain handling. The latter allows the following to work in Servo:

$ ./mach run view-source:http://bluishcoder.co.nz
$ ./mach run http://cd.pn/plainttext.txt

The main issues I encountered working with Rust and Servo were:

  • Compiling Servo is quite slow. Even changing private functions in a module results in other modules being rebuilt, which I assume is due to cross-module inlining.
  • I’d hoped to get away from the intermittent test failures that occur in Gecko, but there seems to be the occasional intermittent reftest failure in Servo too.

The things I liked:

  • Very helpful Servo maintainers on IRC and in github/review comments.
  • Typechecking in Rust helped find errors early.
  • I found it easier to read Servo code alongside the HTML specifications, following the two together, than I do with Gecko.

I hope to contribute more as time permits.

Tags: mozilla  rust  servo 

2015-03-03

Firefox Media Source Extensions Update

This is an update on some recent work on the Media Source Extensions API in Firefox. There has been a lot of work done on MSE and the underlying media framework by Gecko developers and this update just covers some of the telemetry and exposed debug data that I’ve been involved with implementing.

Telemetry

Mozilla has a telemetry system to get data on how Firefox behaves in the real world. We’ve added some MSE video stats to telemetry to help identify usage patterns and possible issues.

Bug 1119947 added information on what state an MSE video is in when the video is unloaded. The intent of this is to find out if users are exiting videos due to slow buffering or seeking. The data is available on telemetry.mozilla.org under the VIDEO_MSE_UNLOAD_STATE category. This has five states:

0 = ended, 1 = paused, 2 = stalled, 3 = seeking, 4 = other

The data provides a count of the number of times a video was unloaded for each state. If a large number of users were exiting during the stalled state then we might have an issue with videos stalling too often. Looking at current stats on beta 37 we see about 3% unloading on stall with 14% on ended and 57% on other. The ‘other’ represents unloading during normal playback.

Bug 1127646 will add additional data to get:

  • Join Latency - time between video load and video playback for autoplay videos
  • Mean Time Between Rebuffering - play time between rebuffering hiccups

This will be useful for determining performance of MSE for sites like YouTube. The bug is going through the review/comment stage and when landed the data will be viewable at telemetry.mozilla.org.

about:media plugin

While developing the Media Source Extensions support in Firefox we found it useful to have a page displaying internal debug data about active MSE videos.

In particular it was good to be able to get a view of what buffered data the MSE JavaScript API had and what our internal Media Source C++ code stored. This helped track down issues involving switching buffers, memory size of resources and other similar things.

The internal data is displayed in an about:media page. Originally the page was hard coded in the browser but :gavin suggested moving it to an addon. The addon is now located at https://github.com/doublec/aboutmedia. That repository includes the aboutmedia.xpi which can be installed directly in Firefox. Once installed you can go to about:media to view data on any MSE videos.

To test this, visit a video that has MSE support in a nightly build with the about:config preferences media.mediasource.enabled and media.mediasource.mp4.enabled set to true. Let the video play for a short time then visit about:media in another tab. You should see something like:

https://www.youtube.com/watch?v=3V7wWemZ_cs
  mediasource:https://www.youtube.com/6b23ac42-19ff-4165-8c04-422970b3d0fb
    currentTime: 101.40625
    SourceBuffer 0
      start=0 end=14.93043
    SourceBuffer 1
      start=0 end=15

    Internal Data:
      Dumping data for reader 7f9d85ef1800:
        Dumping Audio Track Decoders: - mLastAudioTime: 7.732243
          Reader 1: 7f9d75cba800 ranges=[(10.007800, 14.930430)] active=false size=79880
          Reader 0: 7f9d85e88000 ranges=[(0.000000, 10.007800)] active=false size=160246
        Dumping Video Track Decoders - mLastVideoTime: 7.000000
          Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] active=false size=184613
          Reader 0: 7f9d85985000 ranges=[(0.000000, 10.000000)] active=false size=1281914

The first portion of the displayed data shows the JS API view of the buffered data:

currentTime: 101.40625
  SourceBuffer 0
    start=0 end=14.93043
  SourceBuffer 1
    start=0 end=15

This shows two SourceBuffer objects. One containing data from 0-14.9 seconds and the other 0-15 seconds. One of these will be video data and the other audio. The currentTime attribute of the video is 101.4 seconds. Since there is no buffered data for this range the video is likely buffering. I captured this data just after seeking while it was waiting for data from the seeked point.

The second portion of the displayed data shows information on the C++ objects implementing media source:

Dumping data for reader 7f9d85ef1800:
  Dumping Audio Track Decoders: - mLastAudioTime: 7.732243
    Reader 1: 7f9d75cba800 ranges=[(10.007800, 14.930430)] active=false size=79880
    Reader 0: 7f9d85e88000 ranges=[(0.000000, 10.007800)] active=false size=160246
  Dumping Video Track Decoders - mLastVideoTime: 7.000000
    Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] active=false size=184613
    Reader 0: 7f9d85985000 ranges=[(0.000000, 10.000000)] active=false size=1281914

A reader is an instance of the MediaSourceReader C++ class. That reader holds two SourceBufferDecoder C++ instances, one for audio and the other for video. Looking at the video decoder, it has two readers associated with it. These readers are instances of a class derived from MediaDecoderReader, tasked with reading frames from a particular video format (WebM, MP4, etc).

The two readers hold buffered data ranging from 0-10 seconds and 10-15 seconds. Neither is ‘active’, meaning neither is currently the video stream used for playback. This is because we just started a seek. You can see how buffer switching works by watching which readers become active as the video plays. The size is the amount of data in bytes that the reader holds in memory. mLastVideoTime is the presentation time of the last processed video frame.

MSE videos have data evicted as they are played. The size threshold for eviction defaults to 75MB and can be changed with the media.mediasource.eviction_threshold preference in about:config. When data is appended via the appendBuffer method on a SourceBuffer, an eviction routine runs. If more data than the threshold is held, portions of the data held in the readers are removed. This shows up in about:media as start and end ranges being trimmed or readers being removed entirely.

This internal data is most useful for Firefox media developers. If you encounter stalls playing videos or unusual buffer switching behaviour then copy/pasting the data from about:media in a bug report can help with tracking the problem down. If you are developing an MSE player then the information may also be useful to find out why the Firefox implementation may not be behaving how you expect.

The source of the addon is on github and relies on a chrome-only debug method, mozDebugReaderData, on MediaSource. Patches to improve the data and functionality are welcome.

Status

Media Source Extensions is still in progress in Firefox and can be tested on Nightly, Aurora and Beta builds. The current plan is to enable support limited to YouTube only in Firefox 37 on Windows and Mac OS X for MP4 videos. Other platforms, video formats and wider site usage will be enabled in future versions as the implementation improves.

To track work on the API you can follow the MSE bug in Bugzilla.

Tags: mozilla 

2015-02-19

Spawning Windows Commands in Wasp Lisp and MOSREF

It’s been a while since I last wrote about MOSREF and Wasp Lisp. MOSREF is the secure remote injection framework written in Wasp Lisp. It facilitates penetration testing by enabling a console node to spawn drone nodes on different machines. The console handles communication between nodes and can run lisp programs on any node.

The console can execute programs on other nodes with the input and output redirected to the console. One use for this is to create remote shells. MOSREF uses the Wasp Lisp function spawn-command for this. The implementation for Linux is fairly small and simple. On Windows drones it’s somewhat more difficult: it’s not implemented in current Wasp Lisp, and attempting to use the sh command in MOSREF or the spawn-command function in Lisp fails with an error.

I’ve been meaning to try implementing this for quite a while and finally got around to it recently. I’m doing the work in the win_spawn branch of my github fork of WaspVM. With that version of Wasp Lisp and MOSREF built with Windows and Linux stubs available you can spawn Windows commands and capture the output:

>> (define a (spawn-command "cmd.exe /c echo hi"))
:: [win32_pipe_connection 5179A0]
>> (wait a)
:: "hi\r\n"
>> (wait a)
:: close

Bidirectional communication works too:

>> (define a (spawn-command "cmd.exe"))
:: [win32_pipe_connection 517770]
>> (wait a)
:: "Microsoft Windows ..."
>> (send "echo hi\n" a)
:: [win32-pipe-output 517748]
>> (wait a)
:: "echo hi\nhi\r\n\r\nE:\l>"
>> (send "exit\n" a)
:: [win32-pipe-output 517748]
>> (wait a)
:: "exit\n"
>> (wait a)
:: close

With that implemented, and some minor changes to MOSREF to remove the check for Windows, you can interact with remote Windows nodes. I made a quick video demonstrating this. There is no sound but it shows a Linux console on the left and a Windows shell on the right running in a VM.

I create a Windows drone and copy it to a location the Windows VM can access using the MOSREF cp command. This actually copies from the console, where the drone was created, to another Linux drone called tpyo. The Windows VM runs on the same machine as tpyo and can access the drone executable. The executable is run in the VM to connect to the console.

Once connected I run a few Lisp commands on the Windows node. The Lisp is compiled to bytecode on the console, the bytecode is shipped to the drone where it executes, and the result goes back to the console. This is all normal MOSREF operation and works already; I just do it to ensure things are working correctly.

Next I run a sh command which executes the command in the Windows VM with the result sent back to view on the console. Then I make a typo which breaks the connection because of a bug in my code, oops. I recover the drone, reconnect, and run a remote shell as I originally intended. This spawning of commands on Windows is the new code I implemented.

The video is available as mosref.webm, mosref.mp4 or on YouTube.

The implementation on Windows required a bunch of Win32 specific code. I followed an MSDN article on redirecting child process output and another on spawning console processes. This got the basic functionality working pretty quickly but hooking it into the Wasp Lisp event and stream systems took a bit longer.

Wasp uses libevent for asynchronous network and timer functionality. I couldn’t find a way to make this compatible with the Win32 HANDLEs that result from the console spawning code. I ended up writing derived connection, input and output Wasp VM classes for Win32 pipes that use Win32 asynchronous procedure call (APC) callbacks to avoid blocking reads. My inspiration for this was the existing Wasp routines that interact with the Win32 console used by the REPL.

A connection is basically a bidirectional stream from which you can obtain an input and an output channel. A wait on the input channel receives data and a send on the output channel transmits data. When wait is called on the input channel a callback is invoked which should do the read. This can’t block, otherwise all Wasp VM coroutines would stop running. Instead the callback sets a Win32 event which notifies a thread to read the data and post the result back to the main thread via an asynchronous procedure call. A send on the output channel invokes another callback which does the write. Although this can technically block if the pipe buffers are full, I currently call the write directly.

The Wasp VM scheduler has code that checks whether there are any active processes running and, if not, does a blocking wait on libevent to avoid spinning in a polling loop. This had the side effect of preventing the asynchronous procedure call from running, as Windows only executes APCs at certain control points. I had to insert a check so that, when the reading process is de-scheduled waiting for the APC but still around, the event loop spins and calls SleepEx, giving the APC a chance to run.

I’m still working on testing and debugging the implementation but it works pretty well as is. Before I submit a pull request I want to clean up the code a bit and maybe combine some of the duplicate functionality from the console handling code and the pipe code. I also need to check that I’m cleaning up resources correctly, especially the spawned reading/APC handling threads.

Some minor changes were needed to other parts of Wasp Lisp and those commits are in the github repository. They involve environment variable handling on Windows. First I had to enable the support for it, and then change it so that on Windows the environment names are all uppercase. This avoided issues with Wasp looking for commands in PATH vs the Path that was set on my machine; on Windows these are case-insensitive.

For building a Windows compatible Wasp VM stub and REPL I used a cross compiler on Linux, via the gcc-mingw-w64 package in Ubuntu. Build Wasp VM with:

$ ./configure --host=i686-w64-mingw32
$ OS=MINGW32 CC=i686-w64-mingw32-gcc make

This puts the Windows stub in the stubs directory. I copied this to the stubs directory of the node running the console so it could generate Windows drones. I had to build libevent for Windows using the same cross compiler and tweak the Wasp Makefile.cf to find it. Removing the -mno-cygwin flag was needed as well. If no one else gets to it, I’ll submit patches to have the makefile work for cross compilation without changes.
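
Cross-compiling libevent follows the same pattern as the Wasp VM build above; this is a sketch and the exact configure flags may vary with the libevent version:

$ cd libevent
$ ./configure --host=i686-w64-mingw32 --prefix=$HOME/libevent-mingw32
$ make
$ make install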

Tags: waspvm 

2015-01-15

Decentralized Websites with ZeroNet

ZeroNet is a new project that aims to deliver a decentralized web. It uses a combination of bittorrent, a custom file server and a web based user interface to do this, and manages to provide a pretty usable experience.

Users run a ZeroNet node and do their web browsing via the local proxy it provides. Website addresses are public keys, generated using the same algorithm as bitcoin addresses. A request for a website results in the node looking in the bittorrent network for peers that are seeding the site. ZeroNet then connects to selected peers directly, talking to the custom file server each node implements, and downloads the files required for the site. Bittorrent is only used for finding peers, not for the site contents.

Once a site is retrieved the node starts acting as a peer, serving the site’s content to users. The more users browsing your site, the more peers become available to provide the data. If the original site goes down the remaining peers can still serve the content.

Site updates are done by the owner making changes and then signing those changes with the private key for the site address. The update then gets distributed to the peers that are seeding the site.

Browsing is done through a standard web browser. The interface uses Websockets to communicate with the local node and receive real time information about site updates. The interface uses a sandboxed iframe to display websites.

Running

ZeroNet is open source and hosted on github. Everything is done through the one zeronet.py command. To run a node:

$ python zeronet.py
...output...

This will start the node and the file server. A check is made to see if the file server is available for connections externally. If this fails it displays a warning but the system still works. You won’t seed sites or get real time notification of site updates however. The fix for this is to open port 15441 in your firewall. ZeroNet can use UPNP to do this automatically but it requires a MiniUPNP binary for this to work. See the --upnpc command line switch for details.
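
How you open the port depends on your firewall; on Ubuntu with ufw, for example, it would be:

$ sudo ufw allow 15441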

The node can be accessed from a web browser locally using port 43110. Providing a site address as the path will access a particular ZeroNet site. For example, 1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr is the main ‘hello’ site that is first displayed. To access it you’d use the URL http://127.0.0.1:43110/1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr.
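
To quickly check the proxy is answering you can fetch that URL from the command line; note this returns the ZeroNet HTML wrapper page rather than the rendered site:

$ curl http://127.0.0.1:43110/1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr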

Creating a site

To create a site you first need to shut down your running node (using ctrl+c will do it) then run the siteCreate command:

$ python zeronet.py siteCreate
...
- Site private key: ...private key...
- Site address: ...site address...
...
- Site created!

You should record the private key and address as you will need them when updating the site. The command results in a data/address directory being created, where ‘address’ is the site address that siteCreate produced. Inside that are a couple of default files. One of these, content.json, contains JSON data listing the files within the site along with signing information. This gets updated automatically when you sign your site after making changes. If you edit the title key in this file you can give your site a title that appears in the user interface instead of the address.
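
A trimmed content.json looks something like the following; the exact fields vary between ZeroNet versions, and the hashes and signature are generated for you when the site is signed:

{
  "address": "...site address...",
  "title": "My ZeroNet Site",
  "files": {
    "index.html": {"sha512": "...file hash...", "size": 1234}
  },
  "modified": 1420000000,
  "signs": {"...site address...": "...signature..."}
}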

Another file that gets modified during this site creation process is the sites.json file in the data directory. It contains the list of all the sites and some metadata about them.

If you visit http://127.0.0.1:43110/siteaddress in your browser, where siteaddress is the address created with siteCreate, then you’ll see the default website that is created. If your node is peering successfully and you access this address from another node it will download the site, display it, and start seeding it. This is how the site data spreads through the network.

Updating a site

To change a site you first store your files in the data/siteaddress directory. Any HTML, CSS, JavaScript, etc can go here; it’s like a standard website root directory. Just don’t delete the content.json file that’s there. Once you’ve added, modified or removed files you run the siteSign command:

$ python zeronet.py siteSign siteaddress
- Signing site: siteaddress...
Private key (input hidden):

Now you enter the private key that was displayed (and hopefully you saved) when you ran siteCreate. The site gets signed and the information is stored in content.json. To publish these changes to peers seeding the site:

$ python zeronet.py sitePublish siteaddress
...publishes to peers...

If your node is running it will serve the files from the running instance. If it is not then the sitePublish command will continue running to serve the files.

Deleting a site

You can pause seeding a site from the user interface but you can’t delete it. To do that you must shut down the node, delete the site’s data/siteaddress directory manually, and remove its entry from data/sites.json. When you restart the node the site will no longer appear.
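
With the node shut down the cleanup is just filesystem operations, where siteaddress is the address to remove:

$ rm -r data/siteaddress
$ # then delete the matching entry from data/sites.json in a text editor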

Site tips

Because the website is displayed in a sandboxed iframe there are some restrictions on what it can do. The most obvious is that only relative URLs work in anchor elements; clicking an absolute URL does nothing. The sandboxed iframe has the allow-top-navigation option, which means you can link to external pages or other ZeroNet sites if you set the target attribute of the anchor element to _top. So this will work:

<a href="http://bluishcoder.co.nz/" target="_top">click me</a>

But this will not:

<a href="http://bluishcoder.co.nz/">click me</a>

Dynamic websites are supported, but require help from centralized services. The ZeroNet node includes an example of a dynamic website called ‘ZeroBoard’. This site allows users to enter a message in a form which is then published to a list of messages that all peering nodes see. It does this by posting the message to an external web application that the author runs on the standard internet. This web app updates a file inside the site’s ZeroNet directory and then signs it. The result is published to all peers and they automatically get the update through the Websocket interface.

Although this works, it’s unfortunate that it relies on a centralized web application. The ZeroNet author has posted that they are looking at decentralized ways of doing this, maybe using bitmessage or some other system. Something involving peer to peer WebRTC would be interesting.

Conclusion

ZeroNet seems most similar to Tor, I2P or Freenet. Compared to these it lacks the anonymity and encryption aspects, but it decentralizes the site content, which Tor and I2P don’t. Freenet provides decentralization too but does not allow JavaScript in sites. ZeroNet does allow JavaScript, with the usual security and tracking concerns.

Site addresses are in the same format as bitcoin addresses. It should be possible to import the private key into bitcoin, and then bitcoins sent to the public address of a site would be accessible to the site owner. I haven’t tested this but I don’t see why it couldn’t be made to work. Maybe this could be leveraged somehow to enable a web payment method.

ZeroNet’s lack of encryption or obfuscation of the site contents could be a problem. A peer holds the entire site in a local directory. If this contains malicious or illegal content it can be accidentally run or viewed, or it could be picked up in automated scans and the user held responsible. Even if the site originally had harmless content, the site author could push an update containing problematic material. That’s a bit scary.

It’s early days for the project and hopefully some of these issues can be addressed. As it is though it works well, is very usable, and is an interesting experiment in decentralizing websites.

Tags: mozilla  zeronet 

2015-01-08

Improving Linux Font Support in Self

In the Linux version of the Self programming language implementation the fonts used are standard X11 fonts. On modern Linux systems these don’t look great and a common question on the mailing list is how to improve them. Fonts in the Mac OS X build of Self use a different system and look much better. It would be good to convert the Linux version to use Freetype to gain more control over fonts.

I worked on adding Freetype support a couple of years ago and wrote about it on the mailing list. I haven’t done much on it since then but the code is in my github repository under the linux_fonts branch. That work adds Self primitives to access Freetype but does not integrate it into the Self font system. I hope to continue this work sometime but I’m unlikely to get to it in the near future. This post points to the code and shows how to use the primitives, in case someone else would like to take it forward as a project.

To try the code out you’ll need to build Self in the usual manner but using that branch. Once the desktop is launched I test it by creating an object with the following slots:

(| parent* = traits oddball.
   window <- desktop w anyWindowCanvas.
   draw <- nil.
   font <- nil.
   xftcolor <- nil.
   xrcolor <- nil.
 |)

With an outliner for this object I create an evaluator for it and, one at a time, evaluate the following code snippets:

draw: window display xftDrawCreate: window platformWindow
  Visual: window display screen defaultVisualOfScreen
  Colormap: window display screen defaultColormapOfScreen

xftcolor: xlib xftColor new.
xrcolor: xlib xRenderColor new.
xrcolor alpha: 16rffff

window display xftColorAllocValue: window display screen defaultVisualOfScreen
  Colormap: window display screen defaultColormapOfScreen
  RenderColor: xrcolor
  XftColor: xftcolor

font: window display xftFontOpenNameOnScreen: window display screen number
 Name: 'verdana-18'

draw xftDrawString8: xftcolor
  Font: font X: 100 Y: 100
  String: 'Hello World!'

This results in the text ‘Hello World!’ appearing at position 100@100 on the desktop in the Verdana font. The video below demonstrates how it’s done in the Self user interface:

This is a common workflow I use to prototype things in Self. I create an empty object and populate it with slots to hold data. With an evaluator created for that object, these slots are accessible without needing an explicit receiver. This is known as an ‘implicit self call’: the message for the slot is implicitly sent to the current object. I create and remove slots as needed, and I can use the Self outliner to drill down on the slots to look at and manipulate those objects directly if needed.

Hopefully this Freetype support can be used as a base for better looking fonts on Linux. If you are keen to take it further, or have ideas on how to integrate it, I can be contacted using the details at the bottom of my blog, or you can raise it in the Self mailing list or on github.

Tags: self 


This site is accessible over Tor as hidden service mh7mkfvezts5j6yu.onion, or Freenet using key:
USK@1ORdIvjL2H1bZblJcP8hu2LjjKtVB-rVzp8mLty~5N4,8hL85otZBbq0geDsSKkBK4sKESL2SrNVecFZz9NxGVQ,AQACAAE/bluishcoder/-14/

