How Stacksmith handles References

HyperTalk is designed to never crash. That is, barring a bug in HyperCard itself, or a bug in a native-code plugin, any code that a scripter writes should at worst bring up an error message and abort execution of the current event handler, dropping you back into the main event loop.

This sounds simple at first, but becomes a bit of a problem if you are actually the one implementing a HyperTalk interpreter, as I did for Stacksmith’s Hammer programming language.

Scripters can write things like

on mouseUp
  delete me
  set the name of me to "I'm gone"
end mouseUp

If you implement that naively, me is simply a pointer to the object the script belongs to. But what do you do when that object goes away? You could employ reference counting and just prevent the object from going away as long as a script is running, but then you’ve just punted the problem one level up: if a button deletes the window it sits in, the button (running the script) would stay around, because the script holds on to it, but if you then asked the button for its window, you’d still not get a valid object pointer.

You really want to have an object know who is referencing it, so it can just set those references to NULL. Then you make your code check for NULL returns and abort early. But that’s a lot of housekeeping data and maintenance overhead, especially if you have an object that is referenced many times (like me during the course of a script).

So what Stacksmith does is adopt a handle approach. References to objects are stored as indexes into a master pointer table. Each entry consists of the actual pointer and a generation number, the seed.

Whenever you reference an object for the first time, you fill out a new entry in the global array of references: you write the pointer to the actual object into it and increase its seed by one. Your reference then keeps that entry’s index together with the seed.

To retrieve such a referenced object, you find its entry in the array of master pointers, check that the seed still matches the one you saved, and if it does, copy the pointer.

Why the seed? Well, for this to work without crashing, a master pointer entry, once created, must stay around for the life of the program, so we want to be able to re-use entries. So, what happens when an object goes away? It knows about its master pointer and sets it to NULL.

Now, when we look for an unused master pointer, that’s what we look for: An entry in that table that is NULL. We increment its seed, and stick our pointer in it. If some code that referenced the old object comes around now and tries to access the pointer, it still finds valid memory (so no crash), however, when it compares the seed to the one it stored, it realizes that it doesn’t match (indicating that the object has been deleted and the slot re-used), and just returns NULL.
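A minimal sketch of the scheme might look like this (the names are mine, not Stacksmith’s actual API, and the table here stores plain void pointers for brevity):

```cpp
#include <cstddef>
#include <vector>

// One slot in the global master pointer table. Slots are never
// deallocated, only re-used, so even a stale reference always finds
// valid memory to compare its seed against.
struct master_entry {
    void*    object = nullptr;  // nullptr == slot is free.
    unsigned seed = 0;          // bumped every time the slot is re-used.
};

static std::vector<master_entry> gMasterTable;

// A reference: an index into the table plus the seed we saw at creation.
struct object_ref {
    size_t   index;
    unsigned seed;
};

// Create a reference: find a free slot, bump its seed, store the pointer.
object_ref make_ref(void* object) {
    for (size_t i = 0; i < gMasterTable.size(); ++i) {
        if (gMasterTable[i].object == nullptr) {
            gMasterTable[i].object = object;
            ++gMasterTable[i].seed;
            return { i, gMasterTable[i].seed };
        }
    }
    gMasterTable.push_back({ object, 1 });
    return { gMasterTable.size() - 1, 1 };
}

// Resolve a reference: returns nullptr if the object is gone, or if
// the slot has since been re-used for another object.
void* resolve_ref(object_ref ref) {
    const master_entry& entry = gMasterTable[ref.index];
    return (entry.seed == ref.seed) ? entry.object : nullptr;
}

// Called from an object's destructor: invalidates all references at once.
void clear_ref(object_ref ref) {
    if (gMasterTable[ref.index].seed == ref.seed)
        gMasterTable[ref.index].object = nullptr;
}
```

Note how deleting an object is a single pointer write, and how a stale reference is detected purely by the seed mismatch, never by touching freed memory.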

This behaviour goes pretty well with the performance characteristics of most programs:

  • In the common case, when the object is still around, the only penalty we incur is one additional pointer de-reference plus an int comparison.
  • In the uncommon case where an object has gone away, it is just as fast.
  • When an object is deleted, it simply sets one pointer to NULL; everybody who still references it lazily finds out when they try to access it, and is none the wiser if no access ever happens.

The penalties we pay here occur due to a bit less locality when accessing referenced memory, and when creating a reference to an object:

  • Our reference is larger: it stores a seed *in addition to* the actual pointer.
  • The first time a reference is created, we need to find an empty slot in the table of master pointers, currently via linear search.
  • If our table is full, we need to grow it and allocate a new block of master pointers, which then has to stay permanently resident in memory. While a pointer plus seed only uses a few bytes, this still means the table never shrinks back down after our peak usage.

Now, in Stacksmith, there are several ways this is (or could be) optimized:

  • The master pointer table is per “script context group”, so roughly per project. If you close a project, we know that there are no more scripts using this particular table, and we can free the memory.
  • We can remember the last master pointer entry we used, and just start our search from there, so in most cases our “linear search” will just find the next empty entry.
  • Every object keeps track of its reference (so it can set it to NULL when it is deleted). So when a second reference to the same object is created, we can just ask the object to give us its master pointer and seed, without having to scan the master pointer table for an empty slot.
  • Since our objects are reference-counted in addition to using this master pointer scheme, for some uses we can just retain the object instead of taking out a reference.
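The “remember the last master pointer entry we used” optimization from the list above could look roughly like this (a sketch with invented names, not Stacksmith’s actual code):

```cpp
#include <cstddef>
#include <vector>

static std::vector<void*> gSlots;  // the master pointers (nullptr == free)
static size_t gSearchHint = 0;     // where we last found a free slot

// Find a free slot, starting at the hint and wrapping around once, so
// in the common case the "linear search" hits the next empty entry
// immediately instead of rescanning the whole table from the start.
size_t find_free_slot() {
    size_t count = gSlots.size();
    for (size_t step = 0; step < count; ++step) {
        size_t i = (gSearchHint + step) % count;
        if (gSlots[i] == nullptr) {
            gSearchHint = i;
            return i;
        }
    }
    gSlots.push_back(nullptr);     // table full: grow it
    gSearchHint = count;
    return count;
}
```

The worst case is still a full scan, but after a deletion or a fresh allocation the very next search usually terminates after one or two probes.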

While originally designed for the above use case, these references have become a useful facility for avoiding all kinds of lifetime issues across the language, and will likely also come in handy when adding reference parameters to function calls. After all, these references do not care what data type they point to. As long as you keep the seed and properly deregister the entry, you can store any pointer in such a reference. Even to a platform-native object.

Auto Layout: How to do percentage-based layouts

I recently had to implement a two-directional slider (i.e. a box with an indicator that can go anywhere in it). I wanted to do it using modern Auto Layout, and I needed it to resize properly on rotation without my having to recalculate internal variables.

That meant that the position of the slider knob would have to be specified as a percentage (well, fraction) in the multiplier of the constraints, so that whatever size the slider UIView had, it would adapt.

My first attempt was to simply specify the position as a fraction of the width (resp. height). So [self.indicatorView.centerXAnchor constraintEqualToAnchor: self.widthAnchor multiplier: 0.0].active = YES would be left, and multiplier: 1.0 would be right (and analogously for the Y direction).

That worked fine, but had the problem that the indicator could end up “tucked under” the left or right edges. I tried using UIView’s layoutMargins, but that didn’t work either. In the end, I would have had to manually add the fraction corresponding to my indicator’s width at the ends to keep that from happening. Might as well use pixels, then.

The autolayout margin guides

Then I remembered I could just add additional UILayoutGuides (macOS has NSLayoutGuide) to define the margins I wanted to keep clear, then define another layout guide relative to those for the actual draggable area, relative to which I could constrain my indicator.

So first I built 4 guides that were pinned to the edges, had a width (resp. height) of 0.5 × indicatorView.widthAnchor (resp. heightAnchor) and a height/position (resp. width/position) same as the slider view.

Now we had the margins. Then I added a 5th guide that covered the draggable area inside those guides. Then took the old constraints and made them relative to this guide instead of the entire view.

That didn’t work either: a guide’s size starts at 0, so used as a position it would always end up in the upper left. And if I added a constant the size of the margins, I’d have something that wouldn’t update when the view resized again. Might as well use pixels, then.

Drag area and indicator position layout guide (in blue)

Then it struck me: Why not just add another guide? The guide is pinned to the upper left of the draggable area, and its width/height are percentages of the draggable area’s height. I can now set the multiplier on the width/height constraints to my slider percentages, and the lower right corner of this 6th “indicator position” guide would be exactly where I want the indicator to be.

So I just change this guide’s multipliers when the indicator moves, and bind the indicator view’s center to the bottom and right anchors of the indicator position guide, and it all works!
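Stripped of the Auto Layout machinery, the geometry the guides express works out to plain arithmetic (a sketch of the idea; the function name and parameters are mine):

```cpp
// Given the slider view's length along one axis, the indicator's
// length, and a fractional position (0.0 ... 1.0), compute where the
// indicator's center ends up. The half-indicator margin is what the
// four margin guides express; the remaining span is the draggable
// area guide; the fraction is the multiplier on the "indicator
// position" guide's size constraints.
double indicator_center(double viewLength, double indicatorLength,
                        double fraction) {
    double margin = indicatorLength / 2.0;           // the margin guides
    double draggable = viewLength - indicatorLength; // the 5th guide
    return margin + fraction * draggable;            // the 6th guide's corner
}
```

At fraction 0.0 and 1.0 the indicator sits flush against the edges instead of tucked under them, which is exactly what the naive width-multiplier approach got wrong.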


You may note that I keep talking about changing the multiplier on constraints. Yeah, that’s not really possible; the only thing on a constraint that can change is the constant (well, and the identifier, but that would ruin the joke).

So yeah, wherever you read that, what I actually do is remove and re-create the constraint. Sadly, constraints do not have a -removeFromSuperview method, so what I really have to do is walk up from a constraint’s firstItem and secondItem properties to their common ancestor and tell it to remove the constraint (if they are views … if they are guides, that means they’re constraints on self or one of its superviews).

How HyperCard got its color back…

HyperCard 2.4 New Features stack

Many people have heard of HyperCard: An offline precursor to the modern web, the Mac equivalent to Visual Basic, the tool the first Wiki and Myst were made with. Sometimes it is also called the first widely-used implementation of Hyperlinks (though it didn’t really get easy-to-use text link support until the 2.x series, and even that was more complicated than the links we are used to today).

Many people have also heard that HyperCard never got proper color support. But wait! Myst was a full-color game! It even played movies! One of these has to be wrong!

As always, it’s a matter of nuances. Of platforms. And of ingenious hacks.

Black and White

The Crow HyperCard stack

When HyperCard came out, Bill Atkinson intentionally chose a crisp, B/W bitmapped window size of 512×342 pixels. It was the best they had at the time, and it meant that everyone, even people with small Mac Plus/Classic screens, would be able to view it. Being able to trade stacks with your friends was an important part of the social experience of HyperCard.

Atkinson invented a clever lossless image compression scheme, known as “Wrath of Bill” or “WOBA” among HyperCard reverse-engineers, that reduced these B/W pictures to a practical size through a variety of tricks, including run-length encoding and XORing a row with previous rows, which meant that even a checkerboard pattern compressed down to one row of dots repeatedly XORed with a black row.

In 1990, this format was re-engineered slightly so a card could be an arbitrary size, as long as its width was still a multiple of 16.

Native color support

HyperCard IIGS TuneMaker

In 1991 a separate team at Apple was tasked to port HyperCard 1.x back to Apple IIGS home computers. As they had a blank slate to start from, they integrated support for native color. You were able to choose from the Apple IIGS’s 16 color palette by specifying color numbers, and they also added a few features that would later make it back into the original, like radio button groups.

Macintosh HyperCard, then at 2.0, still had no color support. Worse, the syntax introduced for the Apple II and the color file format were not really suitable for Macs, as the IIGS had a very limited color set and pixels that were taller than wide.

Colorizing HyperCard

ColorizeHC's color mode selector

After HyperCard had made its way into the hands of Apple-subsidiary Claris, engineers Eric Carlson and Anup Murarka came up with an ingenious hack to bring color to HyperCard without having to change the file format or even touch the engine code:

HyperCard used double-buffered display. That is, whenever it had to redraw part of the window, it first painted all the individual parts onto each other in a hidden “buffer”, and then copied that buffer to the window on screen. This meant you never got the “stacking” effect seen in so many other drawing programs.

This last copying step was achieved using a system call named CopyBits. If you replaced this system call with your own routine from a HyperCard plugin, you could check whether the destination to be copied to was the current document’s window, and if it was, mix other drawings in.

Since HyperCard drew black outlines on white, they used a “darken” drawing mode to draw their color on top of the B/W picture. Black would stay black, as it is the darkest color, while white areas would “show through” the color overlay. Of course this third merging step wasn’t very fast on the hardware of the time, but it worked fairly well.
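In modern terms, a “darken” transfer mode is just a per-channel minimum, which is why it leaves the black line art intact while tinting the white areas (a sketch of the idea, not the actual QuickDraw code):

```cpp
#include <algorithm>
#include <cstdint>

struct rgb { uint8_t r, g, b; };

// "Darken" compositing: keep the darker of the two values in each
// channel. Black (0,0,0) always wins, so the B/W line art shows
// through; white (255,255,255) always loses, so white card areas take
// on the overlay color.
rgb darken(rgb card, rgb overlay) {
    return { std::min(card.r, overlay.r),
             std::min(card.g, overlay.g),
             std::min(card.b, overlay.b) };
}
```

Run over every pixel of the window during that final CopyBits-replacement pass, this is the whole color overlay effect.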

Colorizing HyperCard startup screen

When it was decided not to release this hack due to concerns about the support load it would generate, the two engineers quickly replaced the Claris logo with a “BungDabba Productions” logo and got permission to release it as a free third-party extension.

AddColor and InColor

AddColor New Features Stack

For a while, Heizer Software, a large distributor of popular HyperCard plugins, offered its own color overlay plugin named InColor, with features like color transition effects (this is what Myst was made with). Eventually, HyperCard proper followed suit: first shipped as a pre-release for use with 2.2, the HyperCard Color Tools stack was included in 2.3.

The included AddColor XCMD used the same approach as ColorizeHC, but instead of having to script all the pictures and drawings you wanted on a card, it wrote the list of items and their colors into ‘HCcd’ and ‘HCbg’ resources with the same ID numbers as your cards and backgrounds, and provided InColor-style transition effects.

The stack also implemented an editor interface: a color and tools palette let you select buttons and fields by simply clicking them and then clicking a color, and it even drew its own “marching ants” selection on top of the selected items, in a fashion not dissimilar to the editors in other HyperCard clones like SuperCard. A separate XFCN named “clicker” had to be used to intercept mouse clicks on the card and draw the marching ants, though.

AddColor 2.0 PICT paint editor

You were able to set a “depth” for the edges of an object, and apart from buttons and fields could also add colored rectangles, PICT files or resources to a card’s color overlay. With AddColor 2.0 (included with HyperCard 2.4) you even got a PICT editor window with color paint tools.

Animation in HyperCard

HyperCard’s animation support was usually restricted to changing 32×32-pixel icons (and later arbitrarily-sized PICTs, using the “icon ID -1 plus button name” trick), or flipping through cards. Given how slowly the color overlay performed on most Macs of the time, neither was really an option for fluid animation in color.

So HyperCard 2.2 bundled ADDmotion II. Not unlike the Color Tools, this product from MotionWorks created its own editor on top of HyperCard, providing you with a Macromind-Director-style timeline interface and a pixel graphics editor. The animations generated were completely separate from HyperCard: they were saved into the stack, and you could then use an XCMD to play one inside the card window, covering the card, before being returned to HyperCard again.

So no released HyperCard for Mac OS ever had color?

HyperCard New Features Stack Button Tasks Page

Nope. Basically, every HyperCard version from 2.1 on added a few new commands here and there, but it was the same HyperCard 2.x.

An exception could be made for HyperCard 2.2, which added a few new button types (popup buttons, more native-looking “standard” and “default” buttons) and other visible features to the core engine, and support for other OSA scripting languages like AppleScript instead of HyperTalk. But color? Nope.

Screen shots courtesy of @HyperCard, used with permission.

Myopic version-control islands


Being a programmer, I use version control software a lot. A while ago, there was a great upsurge in such software: I suppose it started with Versions and Cornerstone, then continued with Git clients like Tower, GitHub and SourceTree.

Yet none of them really innovated beyond their command-line brethren. This may seem like an odd desire, but there are areas where GUI clients can improve on the command-line tools backing them.

Support the user’s workflow

In a talk at NSConference, Aral Balkan once said that “your UI shouldn’t look as if your database had just thrown up all over it”. This is what I’m reminded of when I look at SourceTree.

It feels like someone took a window and just threw in a pushbutton for every action, a text field for the commit message and a checkbox for every option. It presents me all of Git at once. It overwhelms not only me, but also my screen space, as it usually shows much more on the screen than I need at any single time, but since all of it has to be visible, it is all too small to be comfortably used.

All version control software needs to become more aware of context, of “what is it time for now”. Give the user a screen display that only shows things relevant to the current operation.

The File List

The file list is not just useful for when you want to commit a change. It can help with code navigation: I’m in a big project, I’ve edited a few files, I’ve viewed many more. I need to get back to the spot where I started my change, after implementing some needed subroutines and their tests. The recents list in Xcode won’t help me there: too many files I passed on my search for the right spot, some in the main tab, some in multi-file search. But my VCS knows which files I just touched.

I just go into the VCS GUI client, to the list of changed files, and there are the 5 out of 50 files I actually changed. And now that I see these 5 filenames, I can recognize what my colleague named the file. I’ve quickly found it.

Why don’t more VCS GUIs support code navigation? Let me search. Let me select. Heck, if you wanted to get really fancy you could show me the groups in the Xcode project that my files belong to. Analyze, correlate.

Peripheral Vision

The one thing all GUIs for version control systems provide these days is what I’d call “peripheral vision”: They show a constant list of files in your repository and show which ones have changed, live.

You don’t have to actively call git status. Whenever a file changes, it shows up.

By having these updates show up on their own accord, I can be warned of external influences automatically. SmartSVN, for example, shows both the local and remote state of a file. So if a colleague modifies the Xcode project file on the server that I’m currently editing locally, I immediately see in my peripheral vision that I have a pending conflict.

Each Version Control System an Island

Most of the version control GUIs I’ve mentioned ignore one important fact of most people’s work with version control: sure, it is useful to a single developer as unlimited undo, but most of the time it is used in collaborative environments.

If I’m collaborating with someone, isn’t the most important thing to keep me abreast of what the other developers are doing? Why do all the GUIs except SmartSVN (with its horrible non-native Java grab-bag UI) focus so much on showing me my working copy, which is right here in front of me, and then act surprised when something on the server changes, dropping me into an external diff client without any hand-holding?

Apart from showing remote status, why don’t they keep me informed of incoming changes? Why does Cornerstone only let me view the log history of individual files or folders, but doesn’t constantly keep the list of commits in my peripheral vision? Why does no client offer to show me a notification whenever a new push happens on the server?

They just don’t Learn from History

The commit history also seems to be an afterthought to most VCS GUI developers. The only human-curated part of the entire commit metadata is usually hidden on separate tabs, or at best fighting for space with the file list and lots of other UI. File names are short. Commit messages are long. Why should those two lists be forced to be the same width?

In Versions, the commit list can only be read. I can see the changes in it and the message, but can’t select a commit in the list to roll back to that commit, or branch off from it. This is one of the basic tenets of UI design: Don’t have the user type in something the program already knows. The commit hash is right there in front of me on the screen, why do I have to type it in to check out?

Moreover, the list of commits in Versions is not scannable. There are barely noticeable color differences between the date, name and commit message, and they’re too close together and separated by lines.

Ever wonder why Finder uses alternating background colors to distinguish table rows? Because it’s easier to scan: lines are read by the mind as glyphs, additional information to be processed, whereas the “line” where two differently-colored surfaces meet is just accepted as a gap between things.

That’s why so many lists use columns. That way, if you’re looking for a commit from a particular colleague, you just scan down that column, able to completely ignore the commit messages.

The User doesn’t make Mistakes

Users don’t make mistakes. Bad GUI just leads them down the wrong path. When a user makes a mistake, be forgiving.

A contradiction? Yes. While most VCSes already have a never-lose-data policy under the hood, GUIs can improve on that: undo on text fields; a big warning banner across the window when the user is on a detached head, visible even when the window is half-hidden behind Xcode; offering to stash changes for the user if they’re switching branches with uncommitted changes.

If the user selects three “unknown” (aka new) files and asks you to commit them, don’t just abort with Git’s standard error saying that they aren’t under version control! Try to anticipate what the user wanted. Show a window with a list of the offending files and offer to automatically stage them (with checkboxes next to them to turn off ones they might not have wanted to commit).

If a user tries to commit a binary file that has its executable bit set, maybe ask for confirmation in case they’re accidentally checking in the build products, and offer to add the file or one of its enclosing folders to the .gitignore file.

If the user tries to amend a commit, be smart and warn them against changing history that has already been pushed. But don’t warn them needlessly: check whether any remote is ahead of this commit to detect whether the user has already pushed the commit to be rewritten. If not, it’s safe, so just let them do it.

Remote Possibility of Supporting a Workflow

I’ve mentioned how we need to try to support the user’s workflow more and how the server is under-served. This also applies to setup. One of SourceTree’s standout features is that it lets you not only enter your GitHub or Bitbucket URL, but also shows you lists of your remote repositories.

You can set a default folder where your programming stuff goes, and then just select one of your remote repositories and click “clone”, and poof, it checks it out, adds a bookmark for it, and opens it in a window and you’re good to go. Heck, Git Tower even lets you specify the address of an image file in your repository to represent it in the list for quicker scanning.

Why has no VCS GUI added a Coda-style project list that automatically looks for project files and their application icons in a checkout to pre-populate the icon?

Re-open the repositories (yes, users may want to open several at once, deal with it!) the user had open when your app was quit. And for heaven’s sake, why are there VCS developers who don’t know how to make their application accept a folder via drag & drop on its application icon in Finder or the dock so I can quickly open a working copy that’s right there in front of me without having to wait for an open panel to open up?

Promise to be Better

I’m sorry, this has turned into a rant. But the fact is, there are so many VCS applications, yet most simply expose the commands of their command-line equivalents. Why do so few protect me from common mistakes, focus on what my colleagues and I want to achieve, and support us in that?

How can products connected to servers be so asocial?

Raw graphics output in Linux: Part 2


In Part 1 of this series, we set up a command-line Linux in the VirtualBox emulator with support for direct frame buffer access, the git version control system and the clang compiler. Now let’s use this to draw graphics on the screen “by hand”.

Getting the code

The code we’ll be using is on my GitHub. So check it out, e.g. by doing:

mkdir ~/Programming
cd ~/Programming
git clone 'https://github.com/uliwitness/winner.git'

Now you’ll have a ‘winner’ folder in a ‘Programming’ folder inside your home folder. Let’s build and run the code:

cd winner
sudo ./winner

Screen Shot 2015-10-03 at 16.48.13

This code just drew a few shapes on the screen and then immediately quit. The Terminal was rather surprised by that, so it just printed its last line on top of them.

How to access the screen

It took me a bit of googling, but eventually I found out that, to draw on the screen in Linux, you use the framebuffer. As with most things in Linux, the frame buffer is a pseudo-file that you can just open and write to. This pseudo-file resides at /dev/fb0, and it is the whole reason for the extra hoops we jumped through in Part 1, because a minimal Ubuntu doesn’t have this file.

So if you look at the file linux/framebuffer.hpp in our winner Git repository, it simply opens that file and maps it into memory, using the ioctl() function and some selector constants defined in the system header linux/fb.h to find out how large our screen is and how its pixels are laid out.

This is necessary because, at this low level, a screen is simply a long chain of bytes: the third row chained after the second row after the first. Each row consists of pixels, which consist of R, G, B and optionally alpha components.

By mapping it into memory, we can use the screen just like any other block of memory and don’t have to resort to seek() and write() to change pixels on the screen.


Since computers are sometimes faster when memory is aligned on certain multiples of numbers, and you also sometimes want to provide a frame buffer that is a subset of a bigger one (e.g. if a windowed operating system wanted to launch a framebuffer-based application and just trick it into thinking that the rectangle occupied by its window was the screen), the frame buffer includes a line length, x-offset and y-offset.

X and Y offset effectively shift all coordinates, so define the upper left corner of your screen inside the larger buffer. They’re usually 0 for our use case.

The line length is the number of bytes in one row of pixels, which may be larger than the number of pixels * number of bytes in one pixel, because it may include additional, unused “filler” bytes that the computer needs to more quickly access the memory (some computers access memory faster if it is e.g. on an even-numbered address).
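Put together, finding the first byte of the pixel at (x, y) inside the mapped buffer is one multiplication and one addition per axis (a sketch; the parameter names follow the fb_var_screeninfo/fb_fix_screeninfo fields from linux/fb.h):

```cpp
#include <cstddef>

// Byte offset of pixel (x, y) inside a mapped frame buffer.
// line_length may be larger than xres * bytes_per_pixel because of
// padding bytes at the end of each row; xoffset/yoffset shift the
// visible origin inside a larger buffer.
size_t pixel_offset(size_t x, size_t y,
                    size_t xoffset, size_t yoffset,
                    size_t line_length, size_t bytes_per_pixel) {
    return (y + yoffset) * line_length + (x + xoffset) * bytes_per_pixel;
}
```

Using line_length here instead of width × bytes-per-pixel is exactly what keeps the code correct on hardware that pads its rows.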

Actually drawing into the frame buffer

The actual drawing code is in our image class, which doesn’t know about frame buffers. It just knows about a huge block of memory containing pixels, and its layout.

The main method in this class is set_pixel(), which calculates a pointer to the first byte of the pixel at a given coordinate and then, depending on the bit depth of the pixels in the bitmap, composes a 2-byte (16-bit) or 4-byte (32-bit) color value by filling out the given bits of our buffer.
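Composing those color values is bit-shifting. For the common RGB565 (16-bit) and XRGB8888 (32-bit) layouts it might look like this (a sketch; the real code reads each channel’s bit offset and length from the ioctl() results rather than hard-coding them):

```cpp
#include <cstdint>

// 16-bit RGB565: 5 bits red, 6 bits green, 5 bits blue. The low bits
// of each 8-bit channel are simply dropped.
uint16_t rgb565(uint8_t r, uint8_t g, uint8_t b) {
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

// 32-bit XRGB8888: one byte per channel, top byte unused (or alpha).
uint32_t xrgb8888(uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}
```

set_pixel() then just writes the resulting 2 or 4 bytes at the offset computed from the coordinate.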

All other drawing methods depend on this one:

Drawing rectangles

If you look at fill_rect, it simply takes a starting point (upper left corner of the rectangle) and then fills rows of pixels with that color.

To draw a frame around a rectangle is almost the same. We simply fill as many top and bottom rows as our line width dictates, and the rows in between get filled with a pixel (or whatever our line width is) at the left and right of our rectangle.
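As a sketch on top of a set_pixel() primitive (here writing into a small stand-in buffer instead of the real frame buffer), filling and framing look like this:

```cpp
#include <cstddef>

const size_t kWidth = 16, kHeight = 16;
unsigned gPixels[kWidth * kHeight];  // stand-in for the mapped frame buffer

void set_pixel(size_t x, size_t y, unsigned color) {
    if (x < kWidth && y < kHeight)   // poor man's clipping
        gPixels[y * kWidth + x] = color;
}

// Fill: one row of pixels at a time, top to bottom.
void fill_rect(size_t left, size_t top, size_t width, size_t height,
               unsigned color) {
    for (size_t y = top; y < top + height; ++y)
        for (size_t x = left; x < left + width; ++x)
            set_pixel(x, y, color);
}

// Frame: full rows at the top and bottom, short runs at the left and
// right for the rows in between, each lineWidth pixels thick.
void frame_rect(size_t left, size_t top, size_t width, size_t height,
                size_t lineWidth, unsigned color) {
    fill_rect(left, top, width, lineWidth, color);                       // top
    fill_rect(left, top + height - lineWidth, width, lineWidth, color);  // bottom
    fill_rect(left, top + lineWidth, lineWidth,
              height - 2 * lineWidth, color);                            // left
    fill_rect(left + width - lineWidth, top + lineWidth, lineWidth,
              height - 2 * lineWidth, color);                            // right
}
```

Expressing the frame as four filled rectangles keeps all the clipping logic in one place, inside set_pixel().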

Drawing lines

Drawing one-pixel lines involves a tad of basic maths, but nothing you couldn’t get from a quick glance at Wikipedia: you take the line equation known as the “point-slope form”.

Then you calculate the line’s slope based on your start and end point. If the line is more horizontal than vertical, you loop over the X coordinate from start to end and use that and the slope to calculate the corresponding Y. If it is more vertical than horizontal, you loop over the Y coordinate to get the X instead.

Now, if you use this naïve approach, you may get small gaps in the line, because the line equation works with fractional numbers, while our computer screen only has whole, integer pixels. This is why this example uses a variation of the same process invented by Jack Bresenham, which keeps track of the accumulated precision loss and inserts pixels as needed.
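A common integer-only formulation of Bresenham’s algorithm looks like this (a sketch that collects the pixels into a list instead of calling set_pixel(), so the result is easy to inspect; it handles all octants):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Integer-only Bresenham: tracks the accumulated error between the
// ideal line and the pixel grid, stepping sideways whenever that
// error grows too large, so the line never has gaps.
std::vector<std::pair<int,int>> bresenham(int x0, int y0, int x1, int y1) {
    std::vector<std::pair<int,int>> points;
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;  // error term shared by both axes
    for (;;) {
        points.push_back({ x0, y0 });
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step along x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step along y
    }
    return points;
}
```

Replacing the points.push_back() with a set_pixel() call turns this directly into a drawing routine.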

Now drawing a line of more than one pixel in width is a little harder. You see, mathematical lines are infinitely thin; they don’t have a width. When you draw a line of a certain width, what computers usually do is either draw a rotated rectangle that is centered over the line, as long as the line and as wide as your line width, or simply rubber-stamp a filled square or circle of the line width centered over each point on the line, which gives a similar look.

I essentially go with the latter approach in this example, but since I plan to eventually support different opacity for pixels, I do not want to draw whole boxes each time, because they would overlap and a 10% opaque line would end up 20% opaque in every spot where they overlap. So I just detect whether a line is mainly horizontal or vertical, then draw a horizontal or vertical 1 pixel line of the line width through each point.

This isn’t quite perfect and gives diagonal lines a slanted edge, and makes them a bit too wide, so I eventually plan to at least change the code so the small lines are drawn at a 90° angle to the actual line you’re drawing. But that’s not done yet.

Drawing circles

Again, I just get the equation for circles off Wikipedia. It says that r² = (x − centerX)² + (y − centerY)², where r is the radius of the circle you want to draw, x and y are the coordinates of any point you want to test for being on the circle, and centerX and centerY are the center of the circle.

Once you know that, you can draw a circle like you draw a rectangle. You calculate the enclosing rectangle of our circle (by subtracting/adding the radius from/to the center point) and then, instead of just drawing the rectangle, you insert each point into the circle equation. If the right-hand side comes out as r² or less, the point is in the circle, and you can draw it; otherwise you skip this point.

Drawing the outline of a circle is just a specialized version of filling it. Instead of only checking whether the equation comes out as ≤ r², you also check whether it is greater than (r − lineWidth)². So essentially you’re checking whether a point lies between two circles: the inner edge of your outline, and the outer edge of it.
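In code, filling and outlining then differ only in the test applied to each point of the enclosing rectangle (a sketch; the plot callback stands in for set_pixel() with a fixed color):

```cpp
#include <functional>

// Visit every point of the circle's enclosing rectangle and let the
// circle equation decide whether the point gets drawn. For a filled
// circle, any point with distSq <= r^2 is in; for an outline, the
// point must additionally lie outside the inner circle of radius
// (r - lineWidth).
void scan_circle(int centerX, int centerY, int radius, int lineWidth,
                 bool filled,
                 const std::function<void(int,int)>& plot) {
    int rSq = radius * radius;
    int innerSq = (radius - lineWidth) * (radius - lineWidth);
    for (int y = centerY - radius; y <= centerY + radius; ++y) {
        for (int x = centerX - radius; x <= centerX + radius; ++x) {
            int distSq = (x - centerX) * (x - centerX)
                       + (y - centerY) * (y - centerY);
            bool inside = distSq <= rSq;
            bool draw = filled ? inside : (inside && distSq > innerSq);
            if (draw)
                plot(x, y);
        }
    }
}
```

It tests (2r + 1)² points to draw far fewer pixels, which is exactly the inefficiency the quarter- and eighth-circle tricks below avoid.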

This is probably not the optimal way to draw a circle, but it looks decent and is easy enough to understand. There are many tricks. For example, you could calculate only the upper right quarter of the circle, then flip each coordinate horizontally and vertically around the center and thus draw 4 points with every calculation. Bresenham even came up with an algorithm where you only calculate 1/8th of a circle’s pixels.


The library doesn’t do ovals yet, but I think they could be implemented by using the circle equation and multiplying the coordinate of the longer side of the surrounding rectangle by the ratio between width and height. That way, your coordinates are “projected onto a square”, in which you can use the circle equation.

There are probably more efficient ways to do this.

Drawing bitmaps and text

To draw a bitmap (or rather, a pixel map) is basically a special case of rect drawing again. You take a buffer that already contains the raw pixels (like letterA in our example main.cpp). For simplicity, the code currently assumes that all images that you want to draw to the screen use 32-bit pixels. That also allows us to have a transparency value in the last 8 bits.

It simply draws a rectangle the size of the image, but instead of calling set_pixel() with a fixed color, it reads the color from the corresponding pixel in the pixel buffer it is supposed to draw. It also only draws pixels that are 100% opaque.
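Such a blit might be sketched like this (the signature is invented; per the assumption above, the transparency value sits in the low 8 bits of each 32-bit pixel):

```cpp
#include <cstdint>

// Copy a 32-bit source pixel buffer into a destination buffer at (left, top),
// skipping every pixel that is not 100% opaque. No clipping is done here,
// so the source rectangle must lie fully inside the destination.
void draw_bitmap(uint32_t *dst, int dstWidth,
                 const uint32_t *src, int srcWidth, int srcHeight,
                 int left, int top) {
    for (int y = 0; y < srcHeight; ++y)
        for (int x = 0; x < srcWidth; ++x) {
            uint32_t pixel = src[y * srcWidth + x];
            if ((pixel & 0xFF) == 0xFF)  // only fully opaque pixels
                dst[(top + y) * dstWidth + (left + x)] = pixel;
        }
}
```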

Text drawing is now simply a special case of this. You create a bitmap for every letter, then when asked to draw a certain character, load the corresponding bitmap and draw that. Of course, serious text processing would be more complex than that, but that is the foundational process as far as a drawing engine is concerned.

You’d of course need a text layout engine on top of that to handle wrapping, and other code to e.g. combine decomposed characters. Also, if you wanted to support the full Unicode character set (or even just all Chinese glyphs), you’d probably want to make your look-up happen in a way that you don’t need to load all bitmaps immediately, but can rather lazy-load them as they are used.
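The lazy-loading part could be sketched as a glyph cache (all names hypothetical; a real implementation would read the bitmap from font data instead of fabricating an empty one):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// A glyph's pixel data (stand-in for a real bitmap type).
struct glyph {
    int width = 0, height = 0;
    std::vector<uint32_t> pixels;
};

// Loads glyph bitmaps on first use and caches them, so a large character
// set never has to be loaded up front.
class glyph_cache {
    std::map<uint32_t, glyph> loaded;  // keyed by Unicode code point
public:
    int load_count = 0;                // how many glyphs were actually loaded
    const glyph &glyph_for(uint32_t codePoint) {
        auto found = loaded.find(codePoint);
        if (found == loaded.end()) {
            ++load_count;
            glyph g;                   // a real cache would load the bitmap
            g.width = 8;               // for this code point from disk here
            g.height = 8;
            g.pixels.assign(g.width * g.height, 0);
            found = loaded.emplace(codePoint, g).first;
        }
        return found->second;
    }
};
```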


When we later implement our own window manager, we will need to be able to have windows overlap. To do that, we need to be able to designate areas as “covered” and have set_pixel() just not draw when asked to draw into those.

This is not yet implemented. The general approach is to have a bitmap (i.e. a pixel buffer whose pixels only occupy 1 bit, on or off) of the same size as our pixel buffer that indicates which pixels may be drawn into (usually that’s called a “mask”).
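What that could look like, sketched as a variant of the image class (nothing like this exists in the library yet; names are invented):

```cpp
#include <cstdint>
#include <vector>

// An image with a one-bit-per-pixel mask of the same size; set_pixel()
// silently refuses to draw wherever the mask says "covered".
struct masked_image {
    int width, height;
    std::vector<uint32_t> pixels;
    std::vector<bool> mask;  // true = pixel may be drawn into
    masked_image(int w, int h)
        : width(w), height(h), pixels(w * h, 0), mask(w * h, true) {}
    void set_pixel(int x, int y, uint32_t color) {
        if (x < 0 || x >= width || y < 0 || y >= height)
            return;                      // outside the image
        if (!mask[y * width + x])
            return;                      // covered, e.g. by another window
        pixels[y * width + x] = color;
    }
};
```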

There are of course various optimizations you can apply to this. The original Macintosh’s QuickDraw engine used a compressed form of a bitmap called a “Region”, which contained entries for each line indicating the length of each run of same-colored pixels, i.e. “5 pixels off, 10 pixels on”. Some graphics engines only allow clipping to rectangles (which can be described by 4 coordinates). If all your windows are rectangular, that is sufficient.
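The run-length idea for a single mask row could be sketched like this (loosely inspired by QuickDraw’s Regions; the real format was more involved):

```cpp
#include <vector>

// Expand one run-length-encoded mask row into per-pixel flags. Runs
// alternate between "off" and "on", starting with "off", so
// "5 pixels off, 10 pixels on" is encoded as {5, 10}.
std::vector<bool> expand_row(const std::vector<int> &runs, int width) {
    std::vector<bool> row;
    row.reserve(width);
    bool on = false;
    for (int length : runs) {
        row.insert(row.end(), length, on);
        on = !on;
    }
    row.resize(width, false);  // pad the remainder of the row as "off"
    return row;
}
```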

The only clipping the image class currently implements is that circles that fall off any of the edges get clipped, and that rectangles and bitmaps that fall off the bottom or right edges get clipped. The way rectangles are currently specified, it is impossible to have them fall off the left or top, as that would require negative coordinates.

If you currently try to draw outside the image’s defined area using set_pixel(), you will corrupt memory. For a shipping drawing system you’d want to avoid this, and we’ll get to this once we implement a higher-level drawing system on top of this one that deals with clipping, coordinate systems and transformations.

Raw graphics output on Linux: Part 1


In my quest to understand better how my computer works, I decided I want to write a very minimal window server. The first step in that is to create something that performs raw graphics output to the screen, directly to its back buffer.

So, as a test bed, I decided to grab the VirtualBox emulator and install Ubuntu Minimal on it. Ubuntu Minimal is a (comparatively) small Linux that is still easy to install, and will provide the graphics drivers we’ll be talking to, and a file system and a loader to load the code to run.

If you just want to know how drawing itself works, feel free to skip to Part 2 in this blog series.

Setting up the virtual machine

Setting up a VM is fairly self-explanatory with the setup assistant in VirtualBox. It has presets for Linux and even for various Ubuntus, and most of the time the defaults are fine for us:

Screen Shot 2015-10-03 at 01.15.15

Screen Shot 2015-10-03 at 01.15.44

Screen Shot 2015-10-03 at 01.15.51

Screen Shot 2015-10-03 at 01.16.06

Screen Shot 2015-10-03 at 01.16.19

I’m choosing to name the VM “Winner”, short for window server, but you can choose whatever name you like:

Screen Shot 2015-10-03 at 01.16.34

Now you have a nice empty emulated computer:

Screen Shot 2015-10-03 at 01.16.50

Now, we need to tell it to pretend that the mini.iso Linux disk image file we downloaded from Ubuntu was a CD inserted in its optical drive by selecting the “Empty” entry under the CD, then clicking the little disc icon next to the popup on the right to select a file:

Screen Shot 2015-10-03 at 01.17.14

Note that you would have to use the “Choose Virtual Optical Disk File…” item; I have the mini.iso entry in here already because I previously selected the file.

Screen Shot 2015-10-03 at 01.17.28

Screen Shot 2015-10-03 at 01.17.40

Now you can close the window using the “OK” button and click the green “Start” arrow toolbar icon to boot the emulated computer.

Installing Ubuntu Minimal

Screen Shot 2015-10-03 at 01.18.35

Ubuntu will boot up. Choose “Command-Line install” and use the arrow and return keys to navigate through the set-up. Pick your language, country and keyboard layout (if you’re on a Mac, choose to tell it instead of having it detect, and pick the “Macintosh” variant they offer):

Screen Shot 2015-10-03 at 01.18.49

It will then churn a bit:

Screen Shot 2015-10-03 at 01.21.03

And then it will ask you to name your computer:

Screen Shot 2015-10-03 at 01.21.24

You can pick pretty much any name for your emulated home computer, it doesn’t really matter for what we are doing. I picked “winner”.

Then it will ask you to choose the country you are currently in, so it can pick the closest server for downloading additional components:

Screen Shot 2015-10-03 at 01.21.35

And if they have several servers in your country, they’ll offer a choice. Just pick whatever it offers you, it’ll be fine.

Screen Shot 2015-10-03 at 01.21.58

Then it will ask you if you need to use a proxy. Unless you’re in a weird restrictive company or university network or trying to get around an oppressive government’s firewall, you can just leave the field empty and press return here to indicate no proxy is needed:

Screen Shot 2015-10-03 at 01.22.18

Then it will churn some more, downloading stuff off the internet etc.:

Screen Shot 2015-10-03 at 01.22.42

Now it’s time to set up your user account, password (twice) etc.:

Screen Shot 2015-10-03 at 01.23.39

Screen Shot 2015-10-03 at 01.23.45

In this emulator, we don’t need an encrypted hard disk (If you need it, your computer’s hard disk is probably already encrypted, and your emulated computer’s files are all stored on that anyway).

Screen Shot 2015-10-03 at 01.24.40

Then it will ask you about some system clock settings (the defaults should all be fine here):

Screen Shot 2015-10-03 at 01.25.06

Then it will ask how to partition and format the hard disk. You’re not dual-booting anything, the emulated computer is for Linux only, so just let it use the entire disk:

Screen Shot 2015-10-03 at 01.25.31

And don’t worry about selecting the wrong disk, it will only offer the emulated hard disk we created. Tell it to create whatever partitions it thinks are right:

Screen Shot 2015-10-03 at 01.26.02

And it will churn and download some more:

Screen Shot 2015-10-03 at 01.26.11

Since we may want to keep using this for a while, let’s play it safe and tell it to apply any important updates automatically:

Screen Shot 2015-10-03 at 01.36.03

And when it asks if it is OK to install the boot loader in the MBR, just say yes:

Screen Shot 2015-10-03 at 01.38.22

Again, there is no other operating system inside this emulation, they’re just being overly cautious because so many Linux users have weird setups.

For the same reason, you can just let it run the emulator with a UTC system clock as it suggests:

Screen Shot 2015-10-03 at 01.38.38

That’s pretty much all. Tell it to restart, and quickly eject the CD disk image by un-checking it from your “Devices” menu:

Screen Shot 2015-10-03 at 01.38.39

Setting up Ubuntu

Ubuntu is pretty much ready to go. You’ll have a neat command line OS. However, for our purposes, we want to have graphics card drivers. Since this is the minimal Ubuntu, a lot is turned off, so let’s turn that back on again and install some missing parts that we want for our experiments. Log in with your username and password and edit the configuration file /etc/default/grub which tells the bootloader what to do:

Screen Shot 2015-10-03 at 12.22.58

If you’re unfamiliar with the Unix Terminal, just type sudo nano /etc/default/grub and enter your password once it asks. sudo means pretend you’re the computer’s administrator (as we’re changing basic system settings, that’s why it wants your password). nano is a small but fairly easy to use text editor. It shows you all the commands you can use at the bottom in little white boxes, with the keyboard shortcuts used to trigger them right in them (“^” stands for the control key there):

Screen Shot 2015-10-03 at 12.23.33

Most of the lines in this file are deactivated (commented out) using the “#” character. Remove the one in front of GRUB_GFXMODE to tell it we want it to use a graphical display of that size, not the usual text mode that we’re currently using.
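For reference, after the edit the relevant line of /etc/default/grub should look something like this (640x480 is the value the stock file ships with; the exact mode may differ on your system):

```
# Before the edit, the line read: #GRUB_GFXMODE=640x480
GRUB_GFXMODE=640x480
```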

Save and close the file (WriteOut and Exit, i.e. Ctrl+O, Ctrl+X in nano).

Now usually this would be enough, but Ubuntu Minimal is missing a few components. So now type sudo apt-get install v86d. This tells Ubuntu to install the v86d package that does … something. If you left out this step, you would get an error message telling you that v86d doesn’t work on the next step. Confirm that you want to install these whopping 370kb of code by pressing “y” when asked. It will churn a bit.

Type in sudo modprobe uvesafb. The graphics drivers on Linux all implement the so-called “framebuffer” commands. That’s what “fb” here stands for. VirtualBox emulates a “VESA” display, and “uvesafb” is the modern version of the “vesafb” graphics driver you’d want for that. So we’re telling our Kernel to load that module now.

If all goes well, all you should see is your screen resizing to 640×480, i.e. becoming more square-ish:

Screen Shot 2015-10-03 at 12.25.54

Now we don’t want to manually have to activate the frame buffer every time, so let’s add it to the list of modules the Kernel loads automatically at startup. Type sudo nano /etc/initramfs-tools/modules to edit the module list and add “uvesafb” to the end of the list (in my case, that list is empty):

Screen Shot 2015-10-03 at 14.51.45

The professionals also suggest that you check the file /etc/modprobe.d/blacklist-framebuffer.conf to make sure it doesn’t list “uvesafb” as one of the modules not to load. If it does, just put a “#” in front of it to deactivate it.

Screen Shot 2015-10-03 at 12.51.22

Now run sudo update-initramfs -u which tells the system to re-generate some of the startup files that are affected by us adding a new module to the list. It will churn for a moment.

Now we need a nice compiler to compile our code with. There’s probably a copy of GCC already on here, but just for kicks, let’s use clang instead, which gives nicer error messages. Enter sudo apt-get install clang:

Screen Shot 2015-10-03 at 12.26.34

Finally, we need a way to get our source code on this machine, so let’s install the git version control system:

sudo apt-get install git

OK, now pretty much everything we need is set up. Part 2 in this series will get us to actually running some code against this graphics card driver.

You can shut down your virtual Linux box until you’re ready to try Part 2 by typing sudo poweroff.

MacBook Holster

Closed MacBook holster with iPad, MacBook, power supply, adapter & 3G dongle inside.

A few years ago at NSConference, I bought an Incase MacBook Air 11" sleeve off another attendee. I’d been unsuccessfully looking for a good backpack that would hold my MacBook Air without adding much bulk. So I had to make one.

This is not a very complicated task, nor one that needs much description. All I did was buy a roll of backpack strap, a quick-release buckle (so I don’t have to wriggle out of it when that’s inconvenient), a matching tri-glide slider, and a shoulder pad.

Yes, these things really all fit into this MacBook sleeve.

I took a needle and thread and manually sewed the strap to the upper left and lower right corners, making sure the ends were at an angle so that they’d come off the backpack straight. I had to do several rows of sewing to make sure it was attached strongly enough to hold the weight of not only the MacBook, but also the power brick.

Now while I wish I’d had a sewing machine at the time to make the stitch nicer, you don’t usually see that side of the backpack, so at least there’s that consolation. Once I’d looped the strap through the pad and slider and sewed the end that went around the slider’s middle bar onto itself (to get me adjustable strap length), it turned out to be strong enough for not just the MacBook and power supply, but also an iPad, plug adapters and a 3G dongle.

The sewing would have been prettier with a machine.

I wonder why no backpack manufacturer makes small holsters like this. I’ll probably add a zipper to the outside pocket of the sleeve, but apart from that it’s a very useful bag now.

NB – The photos are of a MacBook, but I used to keep a MacBook Air 11" in there and can assure you that it fits just fine, though the bulkier power supply requires a bit of thrusting to get it in the outside pocket.

How I prepare for talks


Among Jaimee Newberry’s fun daily video diary entries is an especially useful one on how she prepares for giving talks. Graham Lee offered his take on preparing for giving talks.

That got me thinking: what do I do? I’m more of a Jaimee-talker, but I don’t really do a single idea in a talk that often. I have a note in the Notes app with all my talk ideas; every time I encounter a problem or question, and every time I find myself answering a question on Stack Overflow or at work, I make a note of it.

I try to group them by topic, and that usually quite naturally turns into a way too long talk outline. Then when it’s time to give a talk, I pick the choice bits out of one or more of these outlines, and make that my outline, sometimes changing the focus. E.g. once I had the mandate to add a missing beginners’ talk on Quartz to a conference, so I took the most basic, most practical graphics issues from my notes, prefixed them with a general tutorial on how the Quartz framework is organized and that was my initial outline.

Then I built a first rough slide deck based on those notes and just started holding the talk using Keynote, with the audio recording function on, in the privacy of my own home. Sometimes, once the talk has advanced a bit, I even (similar to Jaimee) set up my iPhone or iPad to record myself.

So, how does a talk “advance”? Well, it’s simple. First and foremost, I make notes for every slide about the things I’ve said. Also, at some point while giving the talk, I will get stuck. Or I will repeat something I’ve already said earlier. Or I’ll explain something in words that really needs an illustration. At that point I drop into Keynote and either re-arrange the slides or do a first rough illustration.

Once I’ve done this a few times, the talk will feel much more fluid, but will be running horribly long. So I try to do a full run-through without interruptions and time it. Once I have my time, I’ll try to find things I can cut and mercilessly cut them. Things that feel like a detour, or boring, or too trivial. Things that the intended target audience would know already. But sometimes I also realize that I haven’t explained something that needs explaining and add a slide.

Then I do the talk again. Rinse and repeat, until the timing and flow is right. When the slides have stopped moving and disappearing, and I’m happy with what’s in the talk, I’ll start refining the illustrations. Adding builds that reflect my description. Usually that on one hand forces me to go through these slides at “the speed of build”, but it also shortens my descriptions very much, so often it evens out.

If it doesn’t, I might have to cut some more slides, or find a way to simplify what is there to make it go faster.

As you can tell, this is an approach best suited to more technical talks. More “philosophical” talks can sometimes be analyzed enough that this approach works. Other times, they’re more like stories, which makes them harder to re-arrange and to cut. I generally still use the same approach, but it doesn’t work as reliably. What can I say, it’s a work in progress, and I’ll work at sucking less at the not-a-story-not-technical-either kind of talks as best I can.

I haven’t mentioned the title yet. Usually, it comes at the end. I have a working title (e.g. Memory Management Fundamentals), and then look at what is actually in my talk and pick a better name (e.g. On graph paper and memory).

Sometimes, I need to provide a title when I sign up as a speaker. As I already have the notes, I’m usually pretty good at picking a title that works. Sometimes they let me change it afterwards. Sometimes they don’t, and I go for that title with a byline that narrows it down to what the new title would be. (Don’t put a different title on your first slide than is announced in the programme, attendees won’t find you)

One thing I sometimes do in the end is I record myself doing the final talk (with the iPhone camera or whatever) and watch myself doing it, watching out for how I look. Am I scratching my nose? Do I say “umm” a lot? Then I try to remember to turn that down.

How to install Windows 8.1 on a Mac


Update: Microsoft now sells Windows 10 on USB keys, so even Mac users can install it easily if they order physical media. Also, I’ve heard (but have been unable to confirm) that Microsoft’s download version of Windows now can be obtained as an .ISO disk image again, which the Mac’s Disk Utility can extract onto a USB key.
I’m leaving this article up for people who somehow still end up with the .exe installer and are looking for a workaround.

It is not quite trivial to buy Windows as a download and get it onto your Mac. I’ve found a workaround, but it takes a lot of time, and requires you to download about 7GB of data via the internet.

Disclaimer: I do not guarantee that these steps will work. They worked for me in late June 2015, YMMV. Do not blame me if you buy a download version of Windows and then can’t install it. Also, be sure to make a backup of your entire hard disk/SSD before you do this. You will be resizing partitions and doing other things that could lead to accidental loss of data.

The Problem:

  • The microsoftstore.com Windows 8 download is a small .exe file containing a downloader application that needs an already-installed Windows to work.
  • Macs these days don’t have a DVD drive, so you’d need to buy/borrow one to be able to use install DVDs mailed to you.
  • Boot Camp Assistant assumes a physical DVD or an ISO disk image, it obviously can’t run the .exe under MacOS.
  • I was unable to get the .exe downloader to run under CrossOver on MacOS.

My workaround:

  • Download the trial of Windows 8.1 for Enterprise as an ISO image from Microsoft (need to create an MS account which you will also later need to buy the download)
  • Use Boot Camp Assistant to install that onto an empty USB stick that is at least 4GB (not just the Apple-specific drivers, check the option for the full install partition). The stick will be formatted using Windows’ old FAT32 format, which both Mac and Windows can read and write.
  • ~100GB (at least 60) is a good size for the Windows partition to add to your internal hard disk/SSD.
  • Boot Camp will now churn a while and copy the files from the ISO on your USB stick, and will also download the newest hardware drivers from Apple and make sure those get installed as well. Time for breakfast.
  • When Boot Camp Assistant reboots, hold down the option key and select the “EFI Boot” entry to make sure you don’t end up back in MacOS.
  • You will find yourself in the standard Windows installer now. Follow its directions. On Retina Macs, it will be at a tiny 1:1 resolution. Bring a magnifying glass.
  • When asked where to install the Boot Camp partition, find the one named “BOOTCAMP” and select it. Remember what else it says (e.g. “Disk 1 Partition 4”).
  • If the Windows installer complains about the partition not being formatted as NTFS, click the “Format” button underneath the list, but don’t do any repartitioning with the Windows tools; you’d only disturb the fairy dust that Boot Camp Assistant has applied and break booting back into MacOS.
  • Select the reformatted disk (which has now lost its “Bootcamp” name) and click “Next” to start installing the trial.
  • Make lunch while pretty colorful screens rotate through and Windows is set up for you in the background.
  • Run through the Boot Camp installer that runs in Windows after the standard Windows installer has finished.
  • Once you have a working trial install of Windows, buy the download .exe from microsoftstore.com, if you haven’t already. Don’t worry: unless they explicitly say otherwise, installers include both the old-style 32-bit version and the 64-bit version needed for Macs.
  • Run the .exe you just bought while you’re running the Enterprise Windows Trial to create a proper ISO with your purchased Windows 8.1 on it.
  • Back up that Windows.iso and its license key somewhere safe.
  • Copy the Windows.iso onto the USB stick so you can get at it from MacOS.
  • Note down the Windows license key somewhere, you’ll need to type it in in a moment.
  • Boot back into MacOS and run Boot Camp Assistant a second time to remove the trial partition. (BCA doesn’t let you run it again on an existing partition, so you’ll have to nuke and recreate)
  • Run Boot Camp Assistant a 3rd time, this time using the new ISO, not the trial, to get the desired full Windows install. Remember to hold down the Alt key at startup to select “EFI Boot” or you’ll just end up back in MacOS.
  • When the standard Windows installer comes up, you’ll need to enter your Windows license key this time. From then on, the install will be identical to the trial install.
  • Your Yak is shaven clean as a baby’s bum.

Note: In theory, it should be possible to run the .exe under the trial to directly install Windows 8.1 on top of the trial instead of generating the ISO, but I didn’t want to risk it somehow generating a mix of the trial and purchased Windows installs, or eliminating the Boot Camp-supplied drivers & programs, so I decided to nuke the trial once I had the ISO and start fresh. Whatever you do, generate and back up the ISO so you don’t need to request another trial from MS when you inevitably want to reinstall Windows at a later time, even if you then use the .exe and not Boot Camp for the second installation.

Thanks to Sören for pointing me at the Windows trial version that made this possible.

Microsoft supports UIKit


This week’s Build conference held a big surprise: Microsoft announced that they’ve built a UIKit compatibility layer for their various flavours of Windows.

Now I’m mainly a Mac developer and only hear of Windows things from friends and colleagues at the moment (the last time I did Windows work was around Windows XP), but my impression so far was that MS was frantically searching for a new API.

I don’t remember all occurrences, but I remember them announcing Silverlight, and .NET with WPF, and Windows RT that only supported the new APIs, and all sorts of things to then cancel them again.

So my impression as an outsider is that new APIs weren’t trustworthy and MS would always fall back to supporting their old API main-line that they carry around for compatibility reasons anyway.

Announcing UIKit and Android support actually makes a lot of sense in that context:

Although it appears to acknowledge that Windows Phone really didn’t take off, it does solve the catch-22 that MS found themselves in: Lack of apps. In an ideal case, they’ll now get all iOS apps Apple sells, plus the ones Apple rejected for silly reasons, plus those Android apps that iOS users long for.

If this gambit pays off, MS could leap-frog Apple *and* Android.

It also increases trust among developers who are sticking to ancient API: iOS and Android are the only modern APIs that Microsoft could implement that developers would confidently develop against after all these false starts, because even if MS dropped support for them, they’d still have the entire iOS/Android ecosystem to deploy against. So coding against UIKit for Windows Phone is a reasonably safe investment.


Of course, the elephant in the room here is Apple’s recent move to Swift. Now, given that Apple’s frameworks still all seem to be Objective-C internally (even WatchKit), I don’t think MS have missed the train. They might even pick up some Swift critics that are jumping Apple’s ship by supporting Objective-C.

But Swift damages the long-term beauty of MS’s “just call native Windows API from Objective-C” story. They will have to bridge their API to Swift (like Apple does with some of their C-based API right now), instead of getting people to use more and more classic Windows API in their Cocoa apps until the code won’t run on iOS anymore.

Still, that’s a small aesthetic niggle. MS already have a code-generator back-end that they can plug any parser onto, and Swift doesn’t appear to be a particularly difficult language to parse. In any event, parsers are easier than good code generation. For MS to create a Swift compiler is a solved problem, and I’d be surprised if they weren’t already working on it.

Of course, if MS had known about Swift when they started their UIKit for Windows, would they still have written it in Objective-C? Or would they have just written it in Swift with a bridging header?

So given the situation MS have managed to get themselves into, this sounds like it might be a viable solution to survive and, maybe, even come back from again. Still, it is an acknowledgement of how far MS has fallen that they need to implement a competitor’s API on their platform.