Myopic version-control islands

Being a programmer, I use version control software a lot. A while ago, there was a great upsurge in such software. I suppose it started with Versions and Cornerstone, then continued with Git clients like Tower, GitHub and SourceTree.

Yet none of them really innovated beyond their command-line brethren. That may seem like an odd complaint, but there are areas where GUI clients can improve on the command-line tools backing them.

Support the user’s workflow

In a talk at NSConference, Aral Balkan said that “your UI shouldn’t look as if your database had just thrown up all over it”. This is what I’m reminded of when I look at SourceTree.

It feels like someone took a window and just threw in a push button for every action, a text field for the commit message and a checkbox for every option. It presents all of Git at once. That overwhelms not only me, but also my screen space: it usually shows much more than I need at any one time, and since all of it has to be visible, everything is too small to use comfortably.

All version control software needs to become more aware of context, of “what is it time for now”. Give the user a screen display that only shows things relevant to the current operation.

The File List

The file list is not just useful for when you want to commit a change. It can help with code navigation: I’m in a big project, I’ve edited a few files, I’ve viewed many more. I need to get back to the spot where I started my change after implementing some needed subroutines and their tests. The recents list in Xcode won’t help me there; too many files I passed on my search for the right spot. But my VCS knows which files I just touched.

I just go into the VCS GUI client, to the list of changed files, and there are the 5 out of 50 files I actually changed. And now that I see those 5 filenames, I recognize what my colleague named that file. I’ve quickly found it.

Why don’t more VCS GUIs support code navigation? Let me search. Let me select. Heck, if you wanted to get really fancy you could show me the groups in the Xcode project that my files belong to. Analyze, correlate.

Peripheral Vision

The one thing all GUIs for version control systems provide these days is what I’d call “peripheral vision”: they show a live list of the files in your repository, indicating which ones have changed.

You don’t have to actively call git status. Whenever a file changes, it shows up.

By having these updates show up of their own accord, I can be warned of external influences automatically. SmartSVN, for example, shows both the local and remote state of a file. So if a colleague modifies the Xcode project file on the server while I’m editing it locally, I immediately see in my peripheral vision that I have a pending conflict.

Each Version Control System an Island

Most of the version control GUIs I’ve mentioned ignore one important fact of most people’s work with version control: sure, it is useful for single developers as unlimited undo, but most of the time it is used in collaborative environments.

If I’m collaborating with someone, isn’t the most important thing to keep me abreast of what the other developers are doing? Why do all the GUIs (except SmartSVN, with its horrible non-native Java grab-bag of a UI) focus so much on showing me the working copy that is right here in front of me, then act surprised when something changes on the server and drop me into an external diff client without any hand-holding?

Apart from showing remote status, why don’t they keep me informed of incoming changes? Why does Cornerstone only let me view the log history of individual files or folders, but doesn’t constantly keep the list of commits in my peripheral vision? Why does no client offer to show me a notification whenever a new push happens on the server?

They just don’t Learn from History

The commit history also seems to be an afterthought to most VCS GUI developers. The only human-curated part of the entire commit metadata is usually hidden on separate tabs, or at best fighting for space with the file list and lots of other UI. File names are short. Commit messages are long. Why should those two lists be forced to be the same width?

In Versions, the commit list can only be read. I can see the changes in it and the message, but can’t select a commit in the list to roll back to that commit, or branch off from it. This is one of the basic tenets of UI design: Don’t have the user type in something the program already knows. The commit hash is right there in front of me on the screen, why do I have to type it in to check out?

Moreover, the list of commits in Versions is not scannable. There are only barely noticeable color differences between the date, the name and the commit message, and they’re too close together, separated only by lines.

Ever wonder why the Finder uses alternating background colors to distinguish table rows? Because it’s easier to scan: lines are read by the mind as glyphs, additional information to be processed, whereas the “line” where two differently colored surfaces meet is just accepted as a gap between things.

That’s why so many lists use columns. That way, if you’re looking for a commit from a particular colleague, you just scan down that column, able to completely ignore the commit messages.

The User doesn’t make Mistakes

Users don’t make mistakes. Bad GUI just leads them down the wrong path. When a user makes a mistake, be forgiving.

A contradiction? Yes. While most VCSes already have a never-lose-data policy under the hood, GUIs can improve on that. Undo on text fields. A big warning banner across the window when the user is on a detached head, visible even if the window is half-hidden behind Xcode. An offer to stash changes for the user if they’re switching branches with uncommitted changes.

If the user selects three “unknown” (aka new) files and asks you to stage them, don’t just abort with Git’s standard error saying that they aren’t under version control! Try to anticipate what the user wanted. Show a window with a list of the offending files and offer to automatically stage them (with checkboxes next to them to turn off ones they might not have wanted to commit).

If a user tries to commit a binary file that has its executable bit set, maybe ask for confirmation in case they’re accidentally checking in the build products, and offer to add the file or one of its enclosing folders to the .gitignore file.

If the user tries to amend a commit, be smart and warn them against changing history that has already been pushed. But don’t warn them needlessly: check whether any remote is ahead of this commit to detect whether it has already been pushed. If not, it’s safe; just let them do it.

Remote Possibility of Supporting a Workflow

I’ve mentioned how we need to try to support the user’s workflow more and how the server is under-served. This also applies to setup. One of SourceTree’s standout features is that it not only lets you enter your GitHub or Bitbucket URL, but also shows you a list of your remote repositories.

You can set a default folder where your programming stuff goes, and then just select one of your remote repositories and click “clone”, and poof, it checks it out, adds a bookmark for it, and opens it in a window and you’re good to go. Heck, Git Tower even lets you specify the address of an image file in your repository to represent it in the list for quicker scanning.

Why has no VCS GUI added a Coda-style project list that automatically looks for project files and their application icons in a checkout to pre-populate the icon?

Re-open the repositories the user had open when your app was quit (yes, users may want several open at once, deal with it!). And for heaven’s sake, why are there VCS developers who don’t know how to make their application accept a folder via drag & drop on its icon in the Finder or the Dock, so I can quickly open a working copy that’s right there in front of me without having to wait for an open panel?

Promise to be Better

I’m sorry, this has turned into a bit of a rant. But the fact is, there are so many VCS applications, yet most simply expose the commands of their command-line equivalents. Why do so few protect me from common mistakes, focus on what my colleagues and I want to achieve, and support us in that?

How can products connected to servers be so asocial?

Raw graphics output in Linux: Part 2

In Part 1 of this series, we’ve set up a command-line Linux in the VirtualBox emulator with support for direct frame buffer access, the git version control system and the clang compiler. Now let’s use this to draw graphics to the screen “by hand”.

Getting the code

The code we’ll be using is on my GitHub. So check it out, e.g. by doing:

mkdir ~/Programming
cd ~/Programming
git clone 'https://github.com/uliwitness/winner.git'

Now you’ll have a ‘winner’ folder in a ‘Programming’ folder inside your home folder. Let’s build and run the code:

cd winner
sudo ./winner

[Screenshot]

This code just drew a few shapes on the screen and then immediately quit. The Terminal was rather surprised by that, so it just printed its prompt on top of our drawing.

How to access the screen

It took me a bit of googling, but eventually I found out that, to draw on the screen in Linux, you use the framebuffer. As with most things in Linux, the frame buffer is a pseudo-file that you can just open and write to. This pseudo-file resides at /dev/fb0, and it is the whole reason for the extra hoops we jumped through in Part 1, because a minimal Ubuntu doesn’t have this file.

So if you look at the file linux/framebuffer.hpp in our winner Git repository, it simply opens that file and maps it into memory, using the ioctl() function and some selector constants defined in the system header linux/fb.h to find out how large our screen is and how the pixels are laid out.

This is necessary because, at this low level, a screen is simply a long chain of bytes: the third row chained after the second row after the first. Each row consists of pixels, which consist of R, G, B and optionally alpha components.

By mapping it into memory, we can use the screen just like any other block of memory and don’t have to resort to seek() and write() to change pixels on the screen.


Since computers are sometimes faster when memory is aligned on certain multiples, and you also sometimes want to provide a frame buffer that is a subset of a bigger one (e.g. if a windowed operating system wanted to launch a framebuffer-based application and trick it into thinking that the rectangle occupied by its window was the whole screen), the frame buffer includes a line length, an x-offset and a y-offset.

The x and y offsets effectively shift all coordinates, i.e. they define the upper left corner of your screen inside the larger buffer. They’re usually 0 for our use case.

The line length is the number of bytes in one row of pixels, which may be larger than the number of pixels * number of bytes in one pixel, because it may include additional, unused “filler” bytes that the computer needs to more quickly access the memory (some computers access memory faster if it is e.g. on an even-numbered address).
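Putting those values together, finding the first byte of a pixel is a small piece of arithmetic. Here is a sketch (the function name is mine, not the repository’s; the real line length, offsets and bytes-per-pixel come from the ioctl() call):

```cpp
#include <cstddef>

// Sketch: byte offset of pixel (x,y) inside the mapped framebuffer memory.
// line_length is bytes per row (including any filler bytes); xoffset and
// yoffset shift the origin inside a possibly larger buffer.
size_t pixel_offset(size_t x, size_t y,
                    size_t xoffset, size_t yoffset,
                    size_t line_length, size_t bytes_per_pixel) {
    return (y + yoffset) * line_length + (x + xoffset) * bytes_per_pixel;
}
```

Note that the row stride is line_length, not width * bytes_per_pixel, which is exactly how the filler bytes get skipped.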

Actually drawing into the frame buffer

The actual drawing code is in our image class, which doesn’t know about frame buffers. It just knows about a huge block of memory containing pixels, and its layout.

The main method in this class is set_pixel(), which calculates a pointer to the first byte of a pixel at a given coordinate and then, depending on the bit depth of the pixels in the bitmap, composes a 2-byte (16-bit) or 4-byte (32-bit) color value by filling out the given bits of our buffer.
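For illustration, composing such a color value could look like the sketch below. The exact bit positions and widths are reported by the ioctl() call; these functions just assume the common 32-bit ARGB and 16-bit 5-6-5 layouts:

```cpp
#include <cstdint>

// 32-bit pixel, assuming 8 bits each for alpha, red, green, blue:
uint32_t pack_argb32(uint8_t a, uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t(a) << 24) | (uint32_t(r) << 16)
         | (uint32_t(g) << 8)  |  uint32_t(b);
}

// 16-bit pixel, assuming the common 5-6-5 layout: the low bits of each
// component are simply dropped to make the value fit in 16 bits.
uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b) {
    return uint16_t(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```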

All other drawing methods depend on this one:

Drawing rectangles

If you look at fill_rect, it simply takes a starting point (the upper left corner of the rectangle) and a color, and then fills row after row of pixels with that color.

Drawing a frame around a rectangle is almost the same. We simply fill as many top and bottom rows as our line width dictates, and in the rows in between we fill one pixel (or however many our line width dictates) at the left and right of our rectangle.
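In a sketch (names hypothetical, not the repository’s actual API), filling a rectangle reduces to two nested loops over a pixel buffer:

```cpp
#include <cstdint>
#include <vector>

// Fill a w×h rectangle whose upper left corner is (x,y) with a color.
// rowWidth is the number of pixels per row of the destination buffer.
void fill_rect(std::vector<uint32_t>& pixels, int rowWidth,
               int x, int y, int w, int h, uint32_t color) {
    for (int row = y; row < y + h; ++row)      // for each row of the rect...
        for (int col = x; col < x + w; ++col)  // ...fill one run of pixels
            pixels[row * rowWidth + col] = color;
}
```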

Drawing lines

Drawing one-pixel lines involves a tad of basic maths, but it’s nothing that you couldn’t get from a quick glance at Wikipedia. You take the line equation called the “point-slope-form”.

Then you calculate the line’s slope based on your start and end point. If the line is more horizontal than vertical, you loop over the X coordinate from start to end and use that and the slope to calculate the corresponding Y. If it is more vertical than horizontal, you loop over the Y coordinate to get the X instead.

Now, if you use this naïve approach, you may get small gaps in the line, because line equations work with fractional numbers, while our computer screen only has whole, integer pixels. This is why this example uses a variation on the same process invented by someone named “Bresenham”, which keeps track of the loss of precision and adds pixels in as needed.
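The integer-only Bresenham variant can be sketched like this (collecting the points instead of calling set_pixel, so it is easy to inspect; this is the textbook all-octant form, not necessarily the exact code in the repository):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Bresenham's line algorithm: walks from (x0,y0) to (x1,y1) using only
// integer arithmetic, tracking an accumulated error term to decide when
// to step in x, in y, or both.
std::vector<std::pair<int, int>> line_points(int x0, int y0, int x1, int y1) {
    std::vector<std::pair<int, int>> pts;
    int dx = std::abs(x1 - x0), sx = (x0 < x1) ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = (y0 < y1) ? 1 : -1;
    int err = dx + dy;
    while (true) {
        pts.push_back({x0, y0});
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step horizontally
        if (e2 <= dx) { err += dx; y0 += sy; }  // step vertically
    }
    return pts;
}
```

Because the error term decides each step, there is exactly one pixel per column on a mostly-horizontal line (and one per row on a mostly-vertical one), so no gaps appear.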

Now drawing a line of more than one pixel width is a little harder. You see, lines are really infinitely thin and don’t have a width. When you draw a line of a certain width, what computers usually do is either draw a rotated rectangle that is centered over the line, as long as the line and as wide as your line width, or simply rubber-stamp a filled square or circle of the line width centered over each point on the line, which gives a similar look.

I essentially go with the latter approach in this example, but since I plan to eventually support different opacity for pixels, I do not want to draw whole boxes each time, because they would overlap and a 10% opaque line would end up 20% opaque in every spot where they overlap. So I just detect whether a line is mainly horizontal or vertical, then draw a horizontal or vertical 1 pixel line of the line width through each point.

This isn’t quite perfect and gives diagonal lines a slanted edge, and makes them a bit too wide, so I eventually plan to at least change the code so the small lines are drawn at a 90° angle to the actual line you’re drawing. But that’s not done yet.

Drawing circles

Again, I just get the equation for circles off Wikipedia. It says that r² = (x − centerX)² + (y − centerY)², where r is the radius of the circle you want to draw, x and y are the coordinates of any point you want to test for being on the circle, and centerX and centerY are the center of the circle.

Once you know that, you can draw a circle like you draw a rectangle: you calculate the enclosing rectangle of the circle (by subtracting/adding the radius from/to the center point) and then, instead of just drawing the rectangle, you insert each point into the circle equation. If the right-hand side comes out as r² or less, the point is in the circle and you can draw it; otherwise you skip it.

Drawing the outline of a circle is just a specialized version of filling it. Instead of only checking whether the equation comes out as ≤ r², you also check whether it is greater than (r − lineWidth)². So essentially you’re checking whether a point lies between two circles: the inner edge of your outline and the outer edge of it.
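Sketched as two predicates (hypothetical names, integer arithmetic), the checks look like this:

```cpp
// Is (x,y) inside the filled circle of radius r around (cx,cy)?
bool in_filled_circle(int x, int y, int cx, int cy, int r) {
    int dx = x - cx, dy = y - cy;
    return dx * dx + dy * dy <= r * r;
}

// Is (x,y) on the outline, i.e. between the circle of radius r and the
// smaller circle of radius r - lineWidth?
bool on_circle_outline(int x, int y, int cx, int cy, int r, int lineWidth) {
    int dx = x - cx, dy = y - cy;
    int d2 = dx * dx + dy * dy;
    int inner = r - lineWidth;
    return d2 <= r * r && d2 > inner * inner;
}
```

Looping these over the circle’s enclosing rectangle gives the filled and outlined circle, respectively.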

This is probably not the optimal way to draw a circle, but it looks decent and is easy enough to understand. There are many tricks. For example, you could calculate only the upper right quarter of the circle, then flip each coordinate horizontally and vertically around the center and thus draw 4 points with every calculation. Bresenham even came up with an algorithm where you only calculate 1/8th of a circle’s pixels.


The library doesn’t do ovals yet, but I think they could be implemented by using the circle equation and multiplying the coordinate of the longer side of the surrounding rectangle by the ratio between width and height. That way, your coordinates are “projected onto a square”, in which you can use the circle equation.

There are probably more efficient ways to do this.

Drawing bitmaps and text

To draw a bitmap (or rather, a pixel map) is basically a special case of rect drawing again. You take a buffer that already contains the raw pixels (like letterA in our example main.cpp). For simplicity, the code currently assumes that all images that you want to draw to the screen use 32-bit pixels. That also allows us to have a transparency value in the last 8 bits.

It simply draws a rectangle that is the size of the image, but instead of calling set_pixel() with a fixed color, it reads the color from the corresponding pixel in the pixel buffer we are supposed to draw. It also only draws pixels that are 100% opaque.
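A sketch of such a blit (hypothetical names, not the repository’s actual API; I’m assuming here that the transparency value sits in the low byte, per the “last 8 bits” description above):

```cpp
#include <cstdint>
#include <vector>

// Copy a srcW×srcH block of 32-bit pixels to (dx,dy) in the destination,
// skipping every pixel that is not 100% opaque (alpha byte != 0xFF).
void draw_image(std::vector<uint32_t>& dest, int destRowWidth, int dx, int dy,
                const std::vector<uint32_t>& src, int srcW, int srcH) {
    for (int y = 0; y < srcH; ++y)
        for (int x = 0; x < srcW; ++x) {
            uint32_t p = src[y * srcW + x];
            if ((p & 0xFF) == 0xFF)  // only fully opaque pixels are drawn
                dest[(dy + y) * destRowWidth + (dx + x)] = p;
        }
}
```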

Text drawing is now simply a special case of this. You create a bitmap for every letter, then when asked to draw a certain character, load the corresponding bitmap and draw that. Of course, serious text processing would be more complex than that, but that is the foundational process as far as a drawing engine is concerned.

You’d of course need a text layout engine on top of that to handle wrapping, and other code to e.g. combine decomposed characters. Also, if you wanted to support the full Unicode character set (or even just all Chinese glyphs), you’d probably want to make your look-up happen in a way that you don’t need to load all bitmaps immediately, but can rather lazy-load them as they are used.


When we later implement our own window manager, we will need to be able to have windows overlap. To do that, we need to be able to designate areas as “covered” and have set_pixel() just not draw when asked to draw into those.

This is not yet implemented. The general approach is to have a bitmap (i.e. a pixel buffer whose pixels only occupy 1 bit, on or off) of the same size as our pixel buffer that indicates which pixels may be drawn into (usually that’s called a “mask”).
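A minimal sketch of that idea (again, hypothetical names; this is not yet what the library does):

```cpp
#include <cstdint>
#include <vector>

// An image with a same-sized 1-bit mask. set_pixel clips to the image
// bounds and refuses to draw into "covered" (masked-off) pixels.
struct masked_image {
    int width = 0, height = 0;
    std::vector<uint32_t> pixels;
    std::vector<bool>     drawable;  // the mask: true = may be drawn into

    masked_image(int w, int h)
        : width(w), height(h), pixels(w * h, 0), drawable(w * h, true) {}

    void set_pixel(int x, int y, uint32_t color) {
        if (x < 0 || y < 0 || x >= width || y >= height)
            return;                   // off-image: clip instead of corrupting memory
        if (!drawable[y * width + x])
            return;                   // covered, e.g. by an overlapping window
        pixels[y * width + x] = color;
    }
};
```

As a bonus, the bounds check here is exactly the kind of clipping a shipping set_pixel() would need anyway.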

There are of course various optimizations you can apply to this. The original Macintosh’s QuickDraw engine used a compressed form of a bitmap called a “Region”, which simply contained entries for the pixels in each line indicating the length of each run, i.e. “5 pixels off, 10 pixels on”. Some graphics engines only allow clipping to rectangles (which can be described by 4 coordinates). If all your windows are rectangular, that is sufficient.

The only clipping the image class currently implements is that circles that fall off any of the edges get clipped, and that rectangles and bitmaps that fall off the bottom or right edges get clipped. The way rectangles are currently specified, it is impossible to have them fall off the left or top, as that would require negative coordinates.

If you currently try to draw outside the image’s defined area using set_pixel(), you will corrupt memory. For a shipping drawing system you’d want to avoid this, and we’ll get to this once we implement a higher-level drawing system on top of this one that deals with clipping, coordinate systems and transformations.

Raw graphics output on Linux: Part 1

In my quest to understand better how my computer works, I decided I want to write a very minimal window server. The first step in that is to create something that performs raw graphics output to the screen, directly to its back buffer.

So, as a test bed, I decided to grab the VirtualBox emulator and install Ubuntu Minimal on it. Ubuntu Minimal is a (comparatively) small Linux that is still easy to install, and will provide the graphics drivers we’ll be talking to, and a file system and a loader to load the code to run.

If you just want to know how drawing itself works, feel free to skip to Part 2 in this blog series.

Setting up the virtual machine

Setting up a VM is fairly self-explanatory with the setup assistant in VirtualBox. It has presets for Linux and even for various Ubuntus, and most of the time the defaults are fine for us:

[Screenshots]

I’m choosing to name the VM “Winner”, short for window server, but you can choose whatever name you like:

[Screenshot]

Now you have a nice, empty emulated computer:

[Screenshot]

Now, we need to tell it to pretend that the mini.iso Linux disk image file we downloaded from Ubuntu was a CD inserted in its optical drive by selecting the “Empty” entry under the CD, then clicking the little disc icon next to the popup on the right to select a file:

[Screenshot]

Note that you would have to use the “Choose Virtual Optical Disk File…” item; I have the mini.iso entry in here already because I previously selected the file.

[Screenshots]

Now you can close the window using the “OK” button and click the green “Start” arrow toolbar icon to boot the emulated computer.

Installing Ubuntu Minimal

[Screenshot]

Ubuntu will boot up. Choose “Command-Line install” and use the arrow and return keys to navigate through the set-up. Pick your language, country and keyboard layout (if you’re on a Mac, choose to tell it instead of having it detect, and pick the “Macintosh” variant they offer):

[Screenshot]

It will then churn a bit:

[Screenshot]

And then it will ask you to name your computer:

[Screenshot]

You can pick pretty much any name for your emulated home computer; it doesn’t really matter for what we are doing. I picked “winner”.

Then it will ask you to choose the country you are currently in, so it can pick the closest server for downloading additional components:

[Screenshot]

And if they have several servers in your country, they’ll offer a choice. Just pick whatever it offers you, it’ll be fine.

[Screenshot]

Then it will ask you if you need to use a proxy. Unless you’re in a weird restrictive company or university network or trying to get around an oppressive government’s firewall, you can just leave the field empty and press return here to indicate no proxy is needed:

[Screenshot]

Then it will churn some more, downloading stuff off the internet etc.:

[Screenshot]

Now it’s time to set up your user account, password (twice) etc.:

[Screenshots]

In this emulator, we don’t need an encrypted hard disk (If you need it, your computer’s hard disk is probably already encrypted, and your emulated computer’s files are all stored on that anyway).

[Screenshot]

Then it will ask you about some system clock settings (the defaults should all be fine here):

[Screenshot]

Then it will ask how to partition and format the hard disk. You’re not dual-booting anything, the emulated computer is for Linux only, so just let it use the entire disk:

[Screenshot]

And don’t worry about selecting the wrong disk, it will only offer the emulated hard disk we created. Tell it to create whatever partitions it thinks are right:

[Screenshot]

And it will churn and download some more:

[Screenshot]

Since we may want to keep using this for a while, let’s play it safe and tell it to apply any important updates automatically:

[Screenshot]

And when it asks if it is OK to install the boot loader in the MBR, just say yes:

[Screenshot]

Again, there is no other operating system inside this emulation; they’re just being overly cautious because so many Linux users have weird setups.

For the same reason, you can just let it run the emulator with a UTC system clock as it suggests:

[Screenshot]

That’s pretty much all. Tell it to restart, and quickly eject the CD disk image by un-checking it from your “Devices” menu:

[Screenshot]

Setting up Ubuntu

Ubuntu is pretty much ready to go. You’ll have a neat command line OS. However, for our purposes, we want to have graphics card drivers. Since this is the minimal Ubuntu, a lot is turned off, so let’s turn that back on again and install some missing parts that we want for our experiments. Log in with your username and password and edit the configuration file /etc/default/grub which tells the bootloader what to do:

[Screenshot]

If you’re unfamiliar with the Unix Terminal, just type sudo nano /etc/default/grub and enter your password once it asks. sudo means pretend you’re the computer’s administrator (as we’re changing basic system settings, that’s why it wants your password). nano is a small but fairly easy to use text editor. It shows you all the commands you can use at the bottom in little white boxes, with the keyboard shortcuts used to trigger them right in them (“^” stands for the control key there):

[Screenshot]

Most of the lines in this file are deactivated (commented out) using the “#” character. Remove the one in front of GRUB_GFXMODE to tell it we want it to use a graphical display of that size, not the usual text mode that we’re currently using.

Save and close the file (WriteOut and Exit, i.e. Ctrl+O, Ctrl+X in nano).

Now usually this would be enough, but Ubuntu Minimal is missing a few components. So now type sudo apt-get install v86d. This tells Ubuntu to install the v86d package that does … something. If you left out this step, in the next step you would get an error message telling you that v86d doesn’t work. Confirm that you want to install these whopping 370kB of code by pressing “y” when asked. It will churn a bit.

Type in sudo modprobe uvesafb. The graphics drivers on Linux all implement the so-called “framebuffer” commands. That’s what “fb” here stands for. VirtualBox emulates a “VESA” display, and “uvesafb” is the modern version of the “vesafb” graphics driver you’d want for that. So we’re telling our Kernel to load that module now.

If all works, all that you should see is that your screen resizes to 640×480, i.e. becomes more square-ish:

[Screenshot]

Now we don’t want to manually have to activate the frame buffer every time, so let’s add it to the list of modules the Kernel loads automatically at startup. Type sudo nano /etc/initramfs-tools/modules to edit the module list and add “uvesafb” to the end of the list (in my case, that list is empty):

[Screenshot]

The professionals also suggest that you check the file /etc/modprobe.d/blacklist-framebuffer.conf to make sure it doesn’t list “uvesafb” as one of the modules not to load. If it does, just put a “#” in front of it to deactivate it.

[Screenshot]

Now run sudo update-initramfs -u which tells the system to re-generate some of the startup files that are affected by us adding a new module to the list. It will churn for a moment.

Now we need a nice compiler to compile our code with. There’s probably a copy of GCC already on here, but just for kicks, let’s use clang instead, which gives nicer error messages. Enter sudo apt-get install clang:

[Screenshot]

Finally, we need a way to get our source code on this machine, so let’s install the git version control system:

sudo apt-get install git

OK, now pretty much everything we need is set up. Part 2 in this series will get us to actually running some code against this graphics card driver.

You can shut down your virtual Linux box until you’re ready to try Part 2 by typing sudo poweroff.

MacBook Holster

Closed MacBook holster with iPad, MacBook, power supply, adapter & 3G dongle inside.


A few years ago at NSConference, I bought an Incase MacBook Air 11" sleeve off another attendee. I’d been unsuccessfully looking for a good backpack to hold my MacBook Air and not add much bulk. So I had to make one.

This is not a very complicated task, nor one that needs much description. All I did was buy a roll of backpack strap, a quick-release buckle (so I don’t have to wind my way out of the strap when that’s inconvenient), a matching tri-glide slider, and a shoulder pad.

Yes, these things really all fit into this MacBook sleeve.


I took a needle and thread and manually sewed the strap to the upper left and lower right corners, making sure the ends were at an angle so that they’d come off the backpack straight. I had to do several rows of sewing to make sure it was attached strongly enough to hold the weight of not only the MacBook, but also the power brick.

Now while I wish I’d had a sewing machine at the time to make the stitching nicer, you don’t usually see that side of the backpack, so at least there’s that consolation. Once I’d looped the strap through the pad and slider and sewn the end that went around the slider’s middle bar onto itself (to give me an adjustable strap length), it turned out to be strong enough for not just the MacBook and power supply, but also an iPad, plug adapters and a 3G dongle.

The sewing would have been prettier with a machine.


I wonder why no backpack manufacturer makes small holsters like this. I’ll probably add a zipper to the outside pocket of the sleeve, but apart from that it’s a very useful bag now.

NB – The photos are of a MacBook, but I used to keep a MacBook Air 11" in there and can assure you that it fits just fine, though the bulkier power supply requires a bit of thrusting to get it in the outside pocket.

How I prepare for talks

Among Jaimee Newberry’s fun daily video diary entries is an especially useful one on how she prepares for giving talks. Graham Lee offered his take on preparing for giving talks.

That got me thinking: what do I do? I’m more of a Jaimee-style talker, but I don’t often build a talk around a single idea. I keep a note in the Notes app with all my talk ideas; every time I encounter a problem or question, or find myself answering a question on Stack Overflow or at work, I add it there.

I try to group them by topic, and that usually quite naturally turns into a way-too-long talk outline. Then, when it’s time to give a talk, I pick the choice bits out of one or more of these outlines and make that my outline, sometimes changing the focus. E.g. once I had the mandate to add a missing beginners’ talk on Quartz to a conference, so I took the most basic, most practical graphics issues from my notes and prefixed them with a general tutorial on how Quartz is organized, and that was my initial outline.

Then I built a first rough slide deck based on those notes and just started giving the talk using Keynote, with the audio recording function on, in the privacy of my own home. Sometimes, once the talk has advanced a bit, I even (similar to Jaimee) set up my iPhone or iPad to record myself.

So, how does a talk “advance”? It’s simple. First and foremost, I make notes for every slide about the things I’ve said. Also, at some point while giving the talk, I will get stuck. Or I will repeat something I’ve already said earlier. Or I’ll explain something in words that really needs an illustration. At that point I drop into Keynote and either re-arrange the slides or do a first rough illustration.

Once I’ve done this a few times, the talk will feel much more fluid, but will be running horribly long. So I try to do a full run-through without interruptions and time it. Once I have my time, I’ll try to find things I can cut and mercilessly cut them. Things that feel like a detour, or boring, or too trivial. Things that the intended target audience would know already. But sometimes I also realize that I haven’t explained something that needs explaining and add a slide.

Then I do the talk again. Rinse and repeat, until the timing and flow are right. When the slides have stopped moving and disappearing, and I’m happy with what’s in the talk, I’ll start refining the illustrations, adding builds that reflect my description. On one hand that forces me to go through these slides at “the speed of the builds”, but on the other it shortens my descriptions considerably, so it often evens out.

If it doesn’t, I might have to cut some more slides, or find a way to simplify what is there to make it go faster.

As you can tell, this is an approach best suited to more technical talks. More “philosophical” talks can sometimes be analyzed enough that this approach works. Other times, they’re more like stories, which makes them harder to re-arrange and cut down. I generally still use the same approach, but it doesn’t work as reliably. What can I say, it’s a work in progress, and I’ll keep working at sucking less at the not-a-story-but-not-technical-either kind of talk.

I haven’t mentioned the title yet. Usually, it comes at the end. I have a working title (e.g. Memory Management Fundamentals), and then look at what is actually in my talk and pick a better name (e.g. On graph paper and memory).

Sometimes, I need to provide a title when I sign up as a speaker. As I already have the notes, I’m usually pretty good at picking a title that works. Sometimes they let me change it afterwards. Sometimes they don’t, and then I go with that title, plus a byline that narrows it down to what the new title would have been. (Don’t put a different title on your first slide than the one announced in the programme; attendees won’t find you.)

One thing I sometimes do at the end is record myself giving the final talk (with the iPhone camera or whatever) and watch it, paying attention to how I look. Am I scratching my nose? Do I say “umm” a lot? Then I try to remember to tone that down.

How to install Windows 8.1 on a Mac

It is not quite trivial to buy Windows as a download and get it onto your Mac. I’ve found a workaround, but it takes a lot of time and requires you to download about 7GB of data over the internet.

Disclaimer: I do not guarantee that these steps will work. They worked for me in late June 2015, YMMV. Do not blame me if you buy a download version of Windows and then can’t install it. Also, be sure to make a backup of your entire hard disk/SSD before you do this. You will be resizing partitions and doing other things that could lead to accidental loss of data.

The Problem:

  • The microsoftstore.com Windows 8 download is a small .exe file containing a downloader application that needs an already-installed Windows to work.
  • Macs these days don’t have a DVD drive, so you’d need to buy/borrow one to be able to use install DVDs mailed to you.
  • Boot Camp Assistant assumes a physical DVD or an ISO disk image, it obviously can’t run the .exe under MacOS.
  • I was unable to get the .exe downloader to run under CrossOver on MacOS.

My workaround:

  • Download the trial of Windows 8.1 Enterprise as an ISO image from Microsoft (you need to create an MS account, which you will also need later to buy the download)
  • Use Boot Camp Assistant to install that onto an empty USB stick that is at least 4GB (not just the Apple-specific drivers, check the option for the full install partition). The stick will be formatted using Windows’ old FAT32 format, which both Mac and Windows can read and write.
  • ~100GB (at least 60) is a good size for the Windows partition to add to your internal hard disk/SSD.
  • Boot Camp Assistant will now churn a while and copy the files from the ISO to your USB stick, and will also download the newest hardware drivers from Apple and make sure those get installed as well. Time for breakfast.
  • When Boot Camp Assistant reboots, hold down the option key and select the “EFI Boot” entry to make sure you don’t end up back in MacOS.
  • You will find yourself in the standard Windows installer now. Follow its directions. On Retina Macs, it will be at a tiny 1:1 resolution. Bring a magnifying glass.
  • When asked where to install the Boot Camp partition, find the one named “BOOTCAMP” and select it. Remember what else it says (e.g. “Disk 1 Partition 4”).
  • If the Windows installer complains about the partition not being formatted as NTFS, click the “Format” button underneath the list, but don’t do any repartitioning with the Windows tools; you’d only disturb the fairy dust that Boot Camp Assistant has applied and break booting back into MacOS.
  • Select the reformatted disk (which has now lost its “Bootcamp” name) and click “Next” to start installing the trial.
  • Make lunch while pretty colorful screens rotate through and Windows is set up for you in the background.
  • Run through the Boot Camp installer that runs in Windows after the standard Windows installer has finished.
  • Once you have a working trial install of Windows, buy the download .exe from microsoftstore.com, if you haven’t already. Don’t worry: unless they say otherwise, installers include both the old-style 32-bit version and the 64-bit version needed for Macs.
  • Run the .exe you just bought while you’re running the Enterprise Windows Trial to create a proper ISO with your purchased Windows 8.1 on it.
  • Back up that Windows.iso and its license key somewhere safe.
  • Copy the Windows.iso onto the USB stick so you can get at it from MacOS.
  • Note down the Windows license key somewhere, you’ll need to type it in in a moment.
  • Boot back into MacOS and run Boot Camp Assistant a second time to remove the trial partition. (BCA doesn’t let you run it again on an existing partition, so you’ll have to nuke and recreate)
  • Run Boot Camp Assistant a 3rd time, this time using the new ISO, not the trial, to get the desired full Windows install. Remember to hold down the Alt key at startup to select “EFI Boot” or you’ll just end up back in MacOS.
  • When the standard Windows installer comes up, you’ll need to enter your Windows license key this time. From then on, the install will be identical to the trial install.
  • Your yak is now shaved as clean as a baby’s bum.

Note: In theory, it should be possible to run the .exe under the trial to directly install Windows 8.1 on top of the trial instead of generating the ISO, but I didn’t want to risk it somehow generating a mix of the trial and purchased Windows installs, or eliminating the Boot Camp-supplied drivers & programs, so I decided to nuke the trial once I had the ISO and start fresh. Whatever you do, generate and back up the ISO so you don’t need to request another trial from MS when you inevitably want to reinstall Windows at a later time, even if you then use the .exe and not Boot Camp for the second installation.

Thanks: Thanks to Sören for pointing me at the Windows trial version that made this possible.

Microsoft supports UIKit


This week’s Build conference held a big surprise: Microsoft announced that they’ve built a UIKit compatibility layer for their various flavours of Windows.

Now I’m mainly a Mac developer and only hear of Windows things from friends and colleagues at the moment (the last time I did Windows work was around Windows XP), but my impression so far was that MS was frantically searching for a new API.

I don’t remember all the occurrences, but I remember them announcing Silverlight, and .NET with WPF, and Windows RT, which only supported the new APIs, and all sorts of other things, only to then cancel them again.

So my impression as an outsider is that new APIs weren’t trustworthy and MS would always fall back to supporting their old API main-line that they carry around for compatibility reasons anyway.

Announcing UIKit and Android support actually makes a lot of sense in that context:

Although it appears to acknowledge that Windows Phone really didn’t take off, it does solve the catch-22 that MS found themselves in: a lack of apps. Ideally, they’ll now get all the iOS apps Apple sells, plus the ones Apple rejected for silly reasons, plus those Android apps that iOS users long for.

If this gambit pays off, MS could leap-frog Apple *and* Android.

It also increases trust among developers who have been sticking to ancient APIs: after all these false starts, iOS and Android are the only modern APIs Microsoft could implement that developers would confidently develop against, because even if MS dropped support for them, they’d still have the entire iOS/Android ecosystem to deploy against. So coding against UIKit for Windows Phone is a reasonably safe investment.


Of course, the elephant in the room here is Apple’s recent move to Swift. Now, given that Apple’s frameworks still all seem to be Objective-C internally (even WatchKit), I don’t think MS have missed the train. They might even pick up some Swift critics that are jumping Apple’s ship by supporting Objective-C.

But Swift damages the long-term beauty of MS’s “just call native Windows API from Objective-C” story. They will have to bridge their API to Swift (like Apple does with some of their C-based API right now), instead of getting people to use more and more classic Windows API in their Cocoa apps until the code won’t run on iOS anymore.

Still, that’s a small aesthetic niggle. MS already have a code-generator back-end that they can plug any parser onto, and Swift doesn’t appear to be a particularly difficult language to parse. In any event, parsers are easier than good code generation. For MS to create a Swift compiler is a solved problem, and I’d be surprised if they weren’t already working on it.

Of course, if MS had known about Swift when they started their UIKit for Windows, would they still have written it in Objective-C? Or would they have just written it in Swift with a bridging header?

So given the situation MS have managed to get themselves into, this sounds like a viable way to survive and maybe even come back again. Still, it is an acknowledgement of how far MS has fallen that they need to implement a competitor’s API on their platform.

World of Warcraft


World of Warcraft is probably the MMORPG that brought this type of game into the mainstream, and it’s still live and being played today.

So I thought I’d try it out. Luckily, a friend used to be a veritable WoW-fiend doing high-level raids, so I had a pro guiding me through the beginning, suggesting good races to pick etc.

WoW is great, and of course it’s an institution, but oddly, I found I did not enjoy it enough (compared to Star Trek Online, which I’m more familiar with). Since that is rather weird, I thought I’d try to list the things I like and dislike about WoW, in the hope that it’ll make clearer to me what kinds of games I would like to make.

Mission duration

The thing that initially attracted me to STO was that it had story missions, which felt almost like episodes of a TV series. At least as far as I played, WoW’s missions are a lot shorter. Where in STO I play what feels like half an hour to finish a mission, from accepting it to getting the rewards, WoW favors shorter missions of a few minutes, meaning that every few minutes I get a tall un-moveable window containing a wall of text (“lore”) and a button to accept or complete a mission.

Even in STO (where it is at least moveable) I do not like mission windows. They take me out of the story, even though I enjoy the gratification of successfully levelling up. And it just feels hollow to get a big achievement for going from point A to point B.

While STO’s mission accept and completion dialogs are similar, they occur much less often, and one mission consists of several smaller quests in the WoW style. This gives them the opportunity to design the dialog before those missions as a real conversation, not just a monumental text dump. I guess it’s a matter of personal preference, where I fall on the side of conversations.

Beginner Help

Both WoW and STO have little help popups that introduce you to the in-game UI. But like a lot of things in STO (which came later, so I’m really not blaming WoW for this), STO’s implementation feels more like what you’d expect from a real computer program.

You can click each of them either to advance in the direction it points you (which means it will e.g. pop up your inventory to show you how to equip a weapon), or to dismiss a single-popup instruction.

The WoW ones, on the other hand, have no obvious way to dismiss them (I tried all three mouse buttons), and at least in my case have the habit of covering mission rewards (and since the mission window is also immovable, if the mission description doesn’t scroll, there is no way for me to read the rest).

Also, the help in STO is tiered. You create a new character, it shows you every hint exactly once. In WoW, I repeatedly get reminded that I just received a new item and seem to be pretty much forced to equip it right then and there to get rid of the tutorial popups.

Moreover, after each mission, I get a large banner in the middle of the screen, telling me to press M to see the map. Even if I can see where to turn in the mission perfectly fine in the mini-map. Even if this is my 10th mission.

One mistake STO makes with its help that I don’t want to repeat is that popups contain static text and sometimes only point at fixed locations. So if I move an ability from its default spot in the tray, I just get help pointing at the tray, not at the actual ability. Also, any keyboard shortcuts the help mentions are the defaults (they point that out, though), so they do not reflect any changes I may have made to the key bindings in the settings.

UI performance

I play on a Mac. STO’s Mac port is done using Cider, which means I essentially run it in a Windows emulator. Also, its UI feels like it is written like the game itself, i.e. it calls back to the server for confirmation a lot.

While this is correct for actual gameplay and mini-games, it means that on a slow or busy connection (whether on my side or theirs) a lot of the UI loses clicks, even for parts that aren’t timing-sensitive like the confirmation panels when ending a mission or moving between sectors.

WoW on the other hand feels different. Buttons just behave like you’d expect them to, and if you click a button it triggers an action (the exception being if you accidentally right-click instead of left-click a button, in which case it highlights but then never does anything — it should just not highlight in the first place on a right click).

Icon design

The symbols in your tray in STO have a clean, “iconic” look, made up of simple glyphs and (after an update this year) following a system that makes it easy to tell apart the various groups of abilities and match up abilities with their icons.

WoW’s icons are less clean, more fancy, as you’d expect from a fantasy game. I think I’d get used to them if I played some more, but they seem to be at the slightly less self-explanatory level STO’s were at a year ago. WoW would benefit from an icon redesign, I think. That’s a minor nitpick here, but a big thing to keep in mind for one’s own game designs: structure them like STO’s icons, even if I may choose a fancier style for a fantasy game.

Damage feedback

STO’s spaceships especially make it kind of hard to tell when your character takes or deals damage. Since WoW mostly deals in living beings, it can provide much more obvious feedback about damage dealt or received, where your character shrinks back or similar to indicate you’re not doing too well. Such cues on the character models are much better than having to keep a health bar or “hull strength” indicator in your peripheral vision.

In STO it often happens to me that my ship suddenly blows up because I didn’t pay attention to that bar and the damage model has not quite triggered yet and an enemy hits me with an especially strong shot. I do not enjoy One-shot-and-you’re-dead enemies in my games.

If I wanted to have an instant killer enemy for story reasons, I’d make sure there is a little cut scene before the fight starts where it demonstrates this weapon on an unsuspecting NPC, and also that the weapon has some sort of “I have you in my sights” indicator that gives me a chance to evade it.


Audio feedback

When I fight in WoW, my character is constantly complaining: “cannot do that yet”, “I have no target”, “not enough mana”, “this ability isn’t ready yet”. While I like the use of audio for feedback like this, it doesn’t help my immersion that my character constantly talks to me.

This is exacerbated by some misfeatures of the UI, where e.g. it doesn’t auto-target the next enemy, so usually I press buttons to swing the sword and the first press kills the enemy, but I don’t realize it because the animation takes its sweet time to make the character fall over, so I hit again, and hear “I have no target”.

Also, cool-downs and “mana” (or whatever power is used) are displayed separately in the UI. In STO, they’re one thing: if there is not enough power, the button for an ability stays inactive. In WoW, the button becomes active, but the mana bar is empty, and I get “Not enough rage” or whatever.

Now mind you, I’m aware that battle in STO is more real-time, like in shooters (it’s an action RPG after all), whereas WoW follows the conventions of strategy games, where you click your enemy and then the two of you are supposed to fight it out while you watch. It’s about who you pit against whom and what abilities you decide to use, not as much about each individual shot.

So maybe I just need to cool it down. Let the game play that part.

Range of actions

The way I play STO is fairly keyboard-heavy, although you can use the mouse like in WoW, and the tutorial describes e.g. how to fight using the mouse.

So usually every interaction involves walking up to something until an action menu pops up or an ability becomes available, and then triggering it.

WoW generally expects you to use the mouse. This leads to a weird dichotomy for me, because my character is roughly near the object and I can click it with the mouse (“my hand is able to reach it”), but then my character complains that it is too far away, because I’m not yet in range.

There are indicators, mind you. The mouse cursor is B/W instead of color if you’re too far away, and your character tells you it needs to get closer, but again, it doesn’t help my immersion.

Although games already show lots of HUDs like mini-maps and ability trays and mana bars, I’m not sure I’d want popups like in STO. I think LucasArts got it right here: when you clicked an item that was too far away, your character would simply walk up to it and then trigger the action.


Polish

WoW is definitely the more polished game. I’ve had one Mac where it would crash on launch, but whenever I got WoW to run on a Mac, it was solid. STO, on the other hand, has crashed a lot for me on the Mac.

Also, WoW’s progressive downloading in the background is great. It shows you in the progress bar how much it needs before you can start playing at all, how much it recommends so you don’t have to wait, and how much it takes to hit the network the least while playing. And it seems to take only a few megabytes to start playing. The graphic design of the Battle.net client is also quite nice-looking.

STO’s launcher OTOH shows you 9 different progress indicators before you can start playing, and even a fresh download from the server still requires an 8GB patch afterwards. Also, the progress indicator graphics are very eighties, and in the Mac port the text overflows the progress bar’s boundaries.

They may both be using HTML behind the scenes for all I know, but only STO feels like a web site with CSS bugs, WoW feels native.

Also, WoW’s buttons highlight and track properly and responsively, and the whole thing feels like one application, compared to STO’s three. And STO has glitches when switching levels, where the level is drawn first and only then does the load screen cover it, so you get a glimpse of every scene before you enter it. Not a very smooth transition.

WoW also has great in-game details, like rats and foxes running around for ambience, or little children coming up to you and asking you whether you really did all those things you did in the previous quest.

STO only has static characters standing in groups or on corners repeating the same phrase with your name filled in, like “It’s great to see you, Admiral” or “The Federation is doing its best to support you all here”. The infrastructure seems to be there, but I guess they can’t afford to script and implement that in most levels … ?

Trolls and griefers

I just had WoW spoiled by ending up in a cave where I was supposed to kill a spider queen that spawned about every 10 seconds. A few higher-level mages were camped out there and killing the spider as soon as it spawned, with a single shot from a distance.

The worst I’ve ever had happen to me in STO was one guy who kept moving his avatar in front of me and making it dance, or someone triggering a bomb next to me (which doesn’t have any effect beyond making a “poof” effect because outside designated PvP areas all players are on the same team). The solution? I switched to another instance, which is randomly assigned to each player, and identical to the one I was in before. It also takes along my mission progress. No onerous “moving my character to another realm”.

STO seems to generally be engineered to avoid conflict between players; there is not even collision detection between them. Most quests require you to trigger the enemies yourself, so you have the first chance at them: other players can’t know when they’ll pop up, so they can’t get in before you.

Also, most battle zones not only have more enemies than you could be expected to kill in the short timespan you’d need to troll someone, they also reduce the level of all characters in a certain area. So my Vice Admiral (60) fights on Nimbus III as a Level 20, like everyone else, making it just as hard for the troll to kill the objective as it is for me. Chances are I’ll get a success in.


Realms

I don’t like that you need to explicitly select a realm, and that that’s then where your character lives. Although at least in WoW there actually are several realms; Wildstar warns you to select the right realm, then only has a single one. Also, WoW lets you move a character to another realm.

In Defiance, the servers for Europe and North America are completely distinct, and if you log into the other server, you get a new, empty game. If you sometimes play with European friends and sometimes with US friends like I do, you need to level up two separate characters to be able to do that.

Realms are really a technical limitation that players should not be bothered with. Sure, barriers are needed so game areas aren’t overcrowded, but players should be able to change to another “shard” of the server at will to meet friends. Ideally, everyone who has someone else in their friend list should automatically be moved into the same shard. And the user list most definitely should be shared between all servers, so I can play my character whenever I want, and look up friends even if they’re on another shard.

Mind you, I’m not saying there shouldn’t be areas that are restricted to certain factions, or that everyone should just be able to walk into your home.

Update: Changed my opinion on using STO-style popups when I remembered that LucasArts adventures just had your character walk up to an object if it was too far to interact with. Made this less of a shootout and more focused on WoW with info from more other games strewn in.

Why the new MacBook is a success


To those of you following me on Twitter, it hasn’t been a secret that I’ve been waiting for a small, portable Retina MacBook for quite a while. Yesterday, Apple announced one, and it looks to be a good one. Since I’ve heard a lot of nay-sayers, I thought I’d point out some things that people may overlook.

Are you the target audience?

The new device is named “MacBook” and priced in the $1500 price range. Also, the old MacBook Air in the < $1000 price bracket is still available. This makes sense: the MacBook Air moved into the entry-level price bracket a couple of years ago. As with other Mac models, a Retina variant can’t be made at that price profitably. So they’ll keep the old variant for the price-conscious, as a way to attract new users who will then hopefully later upgrade to a more expensive model, or stay at the low end.

So the people looking for a cheaper (but not the cheapest) Mac are one target of this new machine. Another main feature is that it’s slim and ultra-portable. If you would be fine lugging around a 15-incher, you’re definitely not the target audience. If you’re looking for a powerful Mac to run scientific simulations or build large programs using Xcode, you already have the 13-inch Retina MacBook Pro. It is only slightly larger and packs the power and extensibility you need. So why the heck would Apple build a machine identical to those existing ones?

The new MacBook is the future replacement for the MacBook Air 11″. If you wouldn’t have bought that machine, why would you be surprised that this machine at the top end of that price bracket is not for you? It sells like hotcakes, so there obviously are people who want a machine like that.

That ‘single’ port

I’ve seen many people complain about that single USB-C port. But when I look at my and my friends’ usage patterns on the MacBook Air 11″, it turns out that most of the time you don’t use a port at all. Either you’re traveling with it, on a train or plane (if you weren’t, you’d probably be fine with a larger machine), in which case you wouldn’t have anyplace to plug it in anyway.

Or you’re at work, or in your apartment, in which case you’re stationary. So you probably already have an external display that provides power *and* serves as an Ethernet adapter and USB hub, or some other dock, around which your wired ethernet connection or clunky devices that you’d need USB ports for are arranged.

In that situation, only having a single USB-C plug to attach is actually the most convenient solution.

And since USB-C is an industry standard, there’s no doubt that USB keys with C-plugs are only a matter of time. At worst, you may carry along a tiny plug adapter for attaching a colleague’s old USB storage sticks, probably smaller than a 30-pin to Lightning adapter. And you won’t even have to pay the Apple Premium™, because lots of third parties will probably be making these, too.

And they did leave in the combined headphone/mic jack, which is probably the only other port I’d use with any regularity, and which might actually be needed while something else is plugged in.

That M-processor

The MacBook Air has never been known as the high-end machine of the product line. The biggest of the new MacBook’s CPUs Turbo Boosts up to 2.9GHz. That’s faster than the old entry-level ones, but not quite as fast as the old i7 variant. Neither is problematic for the entry-level crowd this machine will eventually be serving. You can write text and e-mails, you can browse the web, you can watch full-HD movies. Heck, Photoshop will not be super-fast but will probably be fine, and while Xcode may take some time, it will still run and get the job done. That’s no change from before.

That Webcam

The Webcam is only 480p. That’s not much. Then again, if you’re calling home, you probably know what your kids or parents look like, and you’ll be able to make out whether a facial expression means a phrase was meant ironically or seriously. That’s really all video calls are for. If you’re at a hotel, picture quality will probably be reduced to horrible macroblocks anyway, due to the slow internet. And very likely, you already own an iPhone or iPad, so you can always use their camera.

And if you actually want to shoot video, you’re likely using an external camera anyway, and not your webcam, which picks up the vibrations from your typing and has a limited angle that requires you to adjust your screen.

Non-clicky trackpad

This one easily had me worried. I’ve never been able to make touch-to-click work on the old trackpads. I always caused clicks at the start of quickly moving the mouse, accidentally trashing or moving files in the process. With force-touch, however, this might work out: if it can detect the difference in pressure, it might have a decent threshold between a strong touch and a soft click. I haven’t actually used one yet, but all the hardware for this to work seems to be present.

The new power supply

I’ve had some issues with my MacBook Air 11″’s power supply. My cable usually goes sideways off the hotel bed, so with the plug being angled backwards, the cable bends off right after the plug. The rubber sleeve on the cable usually starts fraying and breaking after a while. Since the cable is permanently attached to the power supply, that made for some expensive replacements.

In addition, the MagSafe connector kept unplugging when I didn’t want it to. Oddly, when someone actually stepped on the cable, the sudden force would cause the lightweight MacBook to spin around first, so its back was facing in the direction of the cable. This in turn meant that the L-shaped MagSafe plug now functioned as a hook and would not unplug. In short, MagSafe never worked for me on that machine (It’s fine on my old 15″ MBP, because that weighs enough).

As far as I can tell from Apple’s web site, the new power supply is like an iPhone power brick: there is a separate USB-C cable with plugs on both ends now. The USB cable has a straight, not an angled, plug. As such, not only would it not get bent when the cable goes off sideways, someone pulling on it would also no longer turn it into a fishing hook. It’s much more likely that it’d unplug under force now than before. And if it doesn’t, it won’t be any worse for me than before.

Now if this power supply and these connectors make their way to the larger MacBooks, the lack of MagSafe may become an issue. But for this device? Not for me.

The Retina Display

If I read Apple’s web site right, the new MacBook has a 2304×1440 screen. At traditional scale factors, that would make it 1152×720@2x or 1536×960@1.5x. That first resolution would be a show-stopper for me: back when the 11″ MacBook Air came out, most applications did not expect a new Mac to have a screen as small as 768 points tall. Lots of windows didn’t fit on screen, with the “OK” buttons at the bottom ending up offscreen. 48pt less is even worse, and will probably cause that problem again.

You can’t run a 12″ screen at 2304×1440 either. The menu bar would be tiny. You’d spend all day bent over the tiny laptop and ruin your back. However, the 1.5x resolution would be fine for me. The screen is a bit larger, so this should end up only slightly smaller than the old 11″ MacBook Air.

Is it ideal to run this device at 1.5x? No. Is it an improvement over the old non-Retina? Yes. More space to work with, and more pixels for text rendering.

I can’t say for sure that these resolutions would be available, though. Apple’s documentation mentions 2304×1440 at 226ppi, and then a number of “Scaled” resolutions which are really weird sizes like 1280×800. I presume these are just the additional resolutions like you’d find them under “Scaled” in the “Displays” System Preference pane, and that we’ll still have the 1x, 1.5x and 2x switches like we have on current Retina Macs.

That Keyboard

In general, I welcome better keyboards, and as a fast but not very precise typist I laud the idea of a more stable key cap. The only issue I have with this one is that the new single-assembly butterfly mechanism seems to use a thinner piece of material (apparently plastic) as a hinge/joint of sorts. Usually that means that, after some wear and tear, this thinner piece will break. That would mean this device is engineered to break.

The verdict

In my not so humble opinion, people who are complaining are not real Scotsmen… err … not the target audience for this machine. You can still get one of the others, even non-Retina MacBook Airs on their way out. The features Apple cut or compromised on are the ones that will least affect the typical user. It’s a good machine. I’ll probably buy one once I’ve answered that final, all-important question:

… Space Grey or Gold?

Death to Booleans!


One of the most annoying aspects of most C-descended languages is that function calls become kind of unreadable when they have more than a single boolean parameter. The calls start looking like:

    OpenFile( "/etc/passwd", true, true, false );

and you have no idea what effect each boolean actually has. Sometimes people solve this by naming all parameters in the function name, but of course that doesn’t permit adding more optional parameters to a function later, because you’d have to change the name:

    OpenFilePathEditableSaveSavingAllowNetworkURLs( "/etc/passwd", true, true, false );

A disciplined programmer will solve this by adding an enum and using that instead of the booleans:

    enum FileEditability { kReadOnly, kEditable };
    enum FileSafeSaveability { kSafeSave, kOverwriteInPlace };
    enum FileAllowNetworkURLs { kFileURLsOnly, kAllowNetworkURLs };
    void    OpenFile( const char* path, enum FileEditability fe, enum FileSafeSaveability fs, enum FileAllowNetworkURLs fu );

Or maybe just make all booleans a “flags” bitfield:

    enum {
        kEditable = (1 << 0),
        kSafeSave = (1 << 1),
        kAllowNetworkURLs = (1 << 2)
    };
    typedef uint32_t FileOpenFlags;
    void    OpenFile( const char* path, FileOpenFlags inFlags );

But that requires the foresight to never use a single boolean. And of course the actual discipline.

Wouldn't it be nice if C had a special provision for naming booleans? My first thought was to allow specifying enums in-line for parameters:

    void OpenFile( const char* path, enum { kReadOnly, kEditable } inReadOnly );

But to be convenient, this would require some rather too-clever scoping rules. It'd be easy to make the enum available to all callers when they directly call the function, but what about cases where you want to store the value in a variable? Maybe we could do C++-style scope resolution and allow saying OpenFile::kReadOnly ?

Would be a nice way to make it easy to name parameters, but not really readable.

I guess that's why other languages have named parameters instead. Avoids all those issues. So...

The boolean is dead! Long live the boolean! (as long as you have named parameters to label them with)