Programming


Auto Layout: How to do percentage-based layouts

I recently had to implement a two-directional slider (i.e. a box with an indicator that can go anywhere in it). I wanted to do it using modern auto layout, and I needed it to resize properly on rotation without me having to change internal variables.

That meant that the position of the slider knob would have to be specified as a percentage (well, fraction) in the multiplier of the constraints, so that whatever size the slider UIView had, it would adapt.

My first attempt was to simply specify the position as a multiple of the width (resp. height). So [self.indicatorView.centerXAnchor constraintEqualToAnchor: self.widthAnchor multiplier: 0.0].active = YES would be left, and multiplier: 1.0 would be right (and analogously for the Y direction).

That worked fine, but had the problem that the indicator could end up “tucked under” the left or right edges. I tried using UIView’s layoutMargins, but that didn’t work either. In the end, I would have had to manually add the fraction corresponding to my indicator’s width to the ends to keep that from happening. Might as well use pixels, then.

The autolayout margin guides

Then I remembered I could just add additional UILayoutGuides (macOS has NSLayoutGuide) to define the margins I wanted to keep clear, then define another layout guide relative to those for the actual draggable area, relative to which I could constrain my indicator.

So first I built 4 guides that were pinned to the edges, had a width (resp. height) of 0.5 × indicatorView.widthAnchor (resp. heightAnchor), and the same height/position (resp. width/position) as the slider view.

Now we had the margins. Then I added a 5th guide that covered the draggable area inside those guides, took the old constraints, and made them relative to this guide instead of the entire view.

That didn’t work. The height starts at 0, so if used as a position, it would always end up in the upper left. And if I added a constant the size of the margins, I’d have something that wouldn’t update when the view resized again. Might as well use pixels, then.

Drag area and indicator position layout guide (in blue)

Then it struck me: Why not just add another guide? The guide is pinned to the upper left of the draggable area, and its width/height are percentages of the draggable area’s width/height. I can now set the multiplier on the width/height constraints to my slider percentages, and the lower right corner of this 6th “indicator position” guide would be exactly where I want the indicator to be.

So I just change this guide’s multipliers when the indicator moves, and bind the indicator view’s center to the bottom and right anchors of the indicator position guide, and it all works!
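Condensed into code, the guide setup might look something like this (a sketch: the property names positionGuide, dragAreaGuide, xConstraint and yConstraint are mine, and the four margin guides plus the drag-area guide are assumed to be constrained as described above):

self.positionGuide = [UILayoutGuide new];	// The 6th, “indicator position” guide.
[self addLayoutGuide: self.positionGuide];

// Pin the guide’s upper left to the drag area’s upper left:
[self.positionGuide.leadingAnchor constraintEqualToAnchor: self.dragAreaGuide.leadingAnchor].active = YES;
[self.positionGuide.topAnchor constraintEqualToAnchor: self.dragAreaGuide.topAnchor].active = YES;

// Its size is a fraction of the drag area, so its lower right corner
// sits exactly at our two slider percentages:
self.xConstraint = [self.positionGuide.widthAnchor constraintEqualToAnchor: self.dragAreaGuide.widthAnchor multiplier: 0.5];
self.yConstraint = [self.positionGuide.heightAnchor constraintEqualToAnchor: self.dragAreaGuide.heightAnchor multiplier: 0.5];
self.xConstraint.active = YES;
self.yConstraint.active = YES;

// Bind the indicator’s center to that corner:
[self.indicatorView.centerXAnchor constraintEqualToAnchor: self.positionGuide.trailingAnchor].active = YES;
[self.indicatorView.centerYAnchor constraintEqualToAnchor: self.positionGuide.bottomAnchor].active = YES;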

Note

You may note that I keep talking about changing the multiplier on constraints. Yeah, that’s not really possible; the only thing on a constraint that can change is the constant (well, and the identifier, but that would ruin the joke).

So yeah, wherever you read that, what I do is remove and recreate the constraint. Sadly, constraints do not have a -removeFromSuperview method, so what I really have to do is walk from a constraint’s firstItem and secondItem properties up to their common ancestor and tell it to remove the constraint (if they are views … if they are guides, that means they’re constraints on self or one of its superviews).
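In code, the “change the multiplier” dance then looks roughly like this (a sketch using the hypothetical property names from above; note that on iOS 8 / OS X 10.10 and later, setting a constraint’s active property to NO does that remove-from-common-ancestor walk for you):

-(void)	setIndicatorXFraction: (CGFloat)fraction
{
	self.xConstraint.active = NO;	// Detaches it from whichever view holds it.
	self.xConstraint = [self.positionGuide.widthAnchor
							constraintEqualToAnchor: self.dragAreaGuide.widthAnchor
							multiplier: fraction];
	self.xConstraint.active = YES;
}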

How HyperCard got its color back…

HyperCard 2.4 New Features stack

Many people have heard of HyperCard: an offline precursor to the modern web, the Mac equivalent to Visual Basic, the tool the first wiki and Myst were made with. Sometimes it is also called the first widely-used implementation of hyperlinks (though it didn’t really get easy-to-use text link support until the 2.x series, and even that was more complicated than the links we are used to today).

Many people have also heard that HyperCard never got proper color support. But wait! Myst was a full-color game! It even played movies! One of these has to be wrong!

As always, it’s a matter of nuances. Of platforms. And of ingenious hacks.

Black and White

The Crow HyperCard stack

When HyperCard came out, Bill Atkinson intentionally chose a crisp, B/W bitmapped window size of 512×342 pixels. It was the best they had at the time, and it meant that everyone, even people with small Mac Plus/Classic screens, would be able to view it. Being able to trade stacks with your friends was an important part of the social experience of HyperCard.

Atkinson invented a clever lossless image compression scheme, known as “Wrath of Bill” or “WOBA” among HyperCard reverse-engineers, that reduced these B/W pictures to a practical size through a variety of tricks, including run-length encoding and XORing rows with previous rows. That meant that even a checkerboard pattern compressed down to one line of dots XORed repeatedly with a black row.

In 1990, this format was re-engineered slightly so a card could be an arbitrary size, as long as its width was still a multiple of 16.

Native color support

HyperCard IIGS TuneMaker

In 1991, a separate team at Apple was tasked with porting HyperCard 1.x to the Apple IIGS home computer. As they had a blank slate to start from, they integrated support for native color. You were able to choose from the Apple IIGS’s 16-color palette by specifying color numbers, and they also added a few features that would later make it back into the original, like radio button groups.

Macintosh HyperCard, then at 2.0, still had no color support. Worse, the syntax introduced for the Apple II and the color file format were not really suitable for Macs, as the IIGS had a very limited color set and pixels that were taller than wide.

Colorizing HyperCard

ColorizeHC's color mode selector

After HyperCard had made its way into the hands of the Apple subsidiary Claris, engineers Eric Carlson and Anup Murarka came up with an ingenious hack to bring color to HyperCard without having to change the file format or even touch the engine code:

HyperCard used double-buffered display. That is, whenever it had to redraw part of the window, it first painted all the individual parts onto each other in a hidden “buffer”, and then copied that buffer to the window on screen. This meant you never got the “stacking” effect seen in so many other drawing programs.

This last copying step was achieved using a system call named CopyBits. If you replaced this system call with your own routine from a HyperCard plugin, you could check whether the destination to be copied to was the current document’s window, and if it was, mix other drawings in.

Since HyperCard drew in black outlines on white backgrounds, they used a “darken” drawing mode to draw their color on top of the B/W picture. Black would stay black, as it is the darkest color, while white areas would “show through” the color overlay. Of course this third merging step wasn’t very fast on the hardware of the time, but it worked fairly well.

Colorizing HyperCard startup screen

When it was decided not to release this hack due to concerns about the support load it would generate, the two engineers quickly replaced the Claris logo with a “BungDabba Productions” logo and got permission to release it as a free third-party extension.

AddColor and InColor

AddColor New Features Stack

Heizer Software, a large distributor of popular HyperCard plugins, for a while offered its own color overlay plugin named InColor, with features like color transition effects (this is what Myst was made with). Eventually, HyperCard proper followed suit: first shipped as a pre-release for use with 2.2, the HyperCard Color Tools stack was included with 2.3.

The included AddColor XCMD used the same approach as ColorizeHC, but instead of having to script all the pictures and drawings you wanted on a card, it wrote the list of items and their colors into ‘HCcd’ and ‘HCbg’ resources with the same ID numbers as your cards and backgrounds, and provided InColor-style transition effects.

The stack also implemented an editor interface that provided a color and tools palette, letting you select buttons and fields by simply clicking them and then a color. It even drew its own “marching ants” selection on top of the selected items, not dissimilar to the editors in HyperCard clones like SuperCard, although a separate XFCN named “clicker” had to be used to intercept mouse clicks on the card and draw the marching ants.

AddColor 2.0 PICT paint editor

You were able to set a “depth” for the edges of an object, and apart from buttons and fields could also add colored rectangles, PICT files or resources to a card’s color overlay. With AddColor 2.0 (included with HyperCard 2.4) you even got a PICT editor window with color paint tools.

Animation in HyperCard

HyperCard’s animation support had usually restricted itself to changing 32×32 pixel icons (and later arbitrarily-sized PICTs using the “icon ID -1 plus button name” trick), or flipping through cards. Given how slow the color overlay performed on most Macs of the time, these weren’t really an option for fluid animation in color.

So HyperCard 2.2 bundled ADDmotion II. Not unlike the Color Tools, this product from MotionWorks created its own editor on top of HyperCard, providing you with a Macromind-Director-style timeline interface and pixel graphic editor. The animations generated were completely separate from HyperCard. They were saved to the stack and then you could use an XCMD to play one inside the card window, covering the card, and then returning you to HyperCard again.

So no released HyperCard for the Mac OS ever had color?

HyperCard New Features Stack Button Tasks Page

Nope. Basically, every HyperCard version from 2.1 on added a few new commands here and there, but it was the same HyperCard 2.x.

An exception could be made for HyperCard 2.2, which added a few new button types (popup buttons, more native-looking “standard” and “default” buttons) and other visible features to the core engine, plus support for other OSA scripting languages like AppleScript as alternatives to HyperTalk. But color? Nope.

Screen shots courtesy of @HyperCard, used with permission.

Myopic version-control islands

VersionControlIslands

Being a programmer, I use version control software a lot. A while ago, there was a great upsurge in such software. I suppose it started with Versions and Cornerstone, then continued with Git clients like Tower, GitHub for Mac and SourceTree.

Yet none of them really innovated on their command-line brethren. That may seem like an odd thing to ask for, but there are areas where GUI clients could improve on the command-line clients backing them.

Support the user’s workflow

In a talk at NSConference, Aral Balkan once said that “your UI shouldn’t look as if your database had just thrown up all over it”. This is what I’m reminded of when I look at SourceTree.

It feels like someone took a window and just threw in a push button for every action, a text field for the commit message and a checkbox for every option. It presents all of Git to me at once. It overwhelms not only me, but also my screen space, as it usually shows much more than I need at any one time, and since all of it has to be visible, everything is too small to be used comfortably.

All version control software needs to become more aware of context, of “what is it time for now”. Give the user a screen display that only shows things relevant to the current operation.

The File List

The file list is not just useful for when you want to commit a change. It can help with code navigation: I’m in a big project, I’ve edited a few files, I’ve viewed many more. I need to get back to the spot where I started my change, after implementing some needed subroutines and their tests. The recents list in Xcode won’t help me there; too many files I passed on my search for the right spot, some in the main tab, some in multi-file search. But my VCS knows which files I just touched.

I just go into the VCS GUI client, to the list of changed files, and there are the 5 out of 50 files I actually changed. And now that I see these 5 filenames, I recognize what my colleague named that file. I’ve quickly found it.

Why don’t more VCS GUIs support code navigation? Let me search. Let me select. Heck, if you wanted to get really fancy you could show me the groups in the Xcode project that my files belong to. Analyze, correlate.

Peripheral Vision

The one thing all GUIs for version control systems provide these days is what I’d call “peripheral vision”: They show a constant list of files in your repository and show which ones have changed, live.

You don’t have to actively call git status. Whenever a file changes, it shows up.

By having these updates show up on their own accord, I can be warned of external influences automatically. SmartSVN, for example, shows both the local and remote state of a file. So if a colleague modifies the Xcode project file on the server that I’m currently editing locally, I immediately see in my peripheral vision that I have a pending conflict.

Each Version Control System an Island

Most of the version control GUIs I’ve mentioned ignore one important fact of most people’s work with version control: sure, it is useful for single developers as unlimited undo, but most of the time it is used in collaborative environments.

If I’m collaborating with someone, isn’t the most important thing here to keep me abreast of what the other developers are doing? Why do all the GUIs except SmartSVN (with its horrible, non-native Java grab-bag UI) focus so much on showing me the working copy that is right here in front of me, then act surprised when something on the server changes and drop me into an external diff client without any hand-holding?

Apart from showing remote status, why don’t they keep me informed of incoming changes? Why does Cornerstone only let me view the log history of individual files or folders, but doesn’t constantly keep the list of commits in my peripheral vision? Why does no client offer to show me a notification whenever a new push happens on the server?

They just don’t Learn from History

The commit history also seems to be an afterthought to most VCS GUI developers. The only human-curated part of the entire commit metadata is usually hidden on separate tabs, or at best fighting for space with the file list and lots of other UI. File names are short. Commit messages are long. Why should those two lists be forced to be the same width?

In Versions, the commit list can only be read. I can see the changes in it and the message, but can’t select a commit in the list to roll back to that commit, or branch off from it. This is one of the basic tenets of UI design: Don’t have the user type in something the program already knows. The commit hash is right there in front of me on the screen, why do I have to type it in to check out?

Moreover, the list of commits in Versions is not scannable. There are barely noticeable color differences in the date, name and commit message, and they’re too close together and separated by lines.

Ever wonder why Finder uses alternating background colors to distinguish table rows? Because it’s easier to scan: lines are read by the mind as glyphs, additional information to be processed, whereas the “line” where two different-colored surfaces meet is just accepted as a gap between things.

That’s why so many lists use columns. That way, if you’re looking for a commit from a particular colleague, you just scan down that column, able to completely ignore the commit messages.

The User doesn’t make Mistakes

Users don’t make mistakes. Bad GUI just leads them down the wrong path. When a user makes a mistake, be forgiving.

A contradiction? Yes. While most VCSes already have an under-the-hood policy of never losing data, GUIs can improve on that: undo on text fields; a big warning banner across the window when the user is on a detached HEAD, visible even if the window is half-hidden behind Xcode; offering to stash changes for users who switch branches while they still have uncommitted changes.

If the user selects three “unknown” (aka new) files and asks you to commit them, don’t just abort with Git’s standard error saying that they aren’t under version control! Try to anticipate what the user wanted. Show a window with a list of the offending files and offer to automatically stage them (with checkboxes next to them to turn off ones they might not have wanted to commit).

If a user tries to commit a binary file that has its executable bit set, maybe ask for confirmation in case they’re accidentally checking in the build products, and offer to add the file or one of its enclosing folders to the .gitignore file.

If the user tries to amend a commit, be smart and warn them against changing history that’s already been pushed. But don’t warn them needlessly. Can you check whether any remote is ahead of this commit, to detect whether the user has already pushed the commit to be rewritten? If not, it’s safe; just let them do it.

Remote Possibility of Supporting a Workflow

I’ve mentioned how we need to try to support the user’s workflow more and how the server is under-served. This also applies to setup. One of SourceTree’s standout features is that it lets you not only enter your Github or Bitbucket URL, but also shows you lists of your remote repositories.

You can set a default folder where your programming stuff goes, and then just select one of your remote repositories and click “clone”, and poof, it checks it out, adds a bookmark for it, and opens it in a window and you’re good to go. Heck, Git Tower even lets you specify the address of an image file in your repository to represent it in the list for quicker scanning.

Why has no VCS GUI added a Coda-style project list that automatically looks for project files and their application icons in a checkout to pre-populate the icon?

Re-open the repositories the user had open when your app was quit (yes, users may want to open several at once, deal with it!). And for heaven’s sake, why are there VCS developers who don’t know how to make their application accept a folder dragged onto its icon in Finder or the Dock, so I can quickly open a working copy that’s right there in front of me without having to wait for an open panel?

Promise to be Better

I’m sorry, this has turned into a bit of a rant. But the fact is, there are so many VCS applications, yet most simply expose the commands of their command-line equivalents. Why do so few protect me from commonly made mistakes, focus on what my colleagues and I want to achieve, and support us in that?

How can products connected to servers be so asocial?

Raw graphics output in Linux: Part 2

DrawingOnLinux2

In Part 1 of this series, we’ve set up a command-line Linux in the VirtualBox emulator with support for direct frame buffer access, the git version control system and the clang compiler. Now let’s use this to draw graphics to the screen “by hand”.

Getting the code

The code we’ll be using is on my GitHub. So check it out, e.g. by doing:

mkdir ~/Programming
cd ~/Programming
git clone 'https://github.com/uliwitness/winner.git'

Now you’ll have a ‘winner’ folder in a ‘Programming’ folder inside your home folder. Let’s build and run the code:

cd winner
make
sudo ./winner

Screen Shot 2015-10-03 at 16.48.13

This code just drew a few shapes on the screen and then immediately quit. The terminal was rather surprised by that, so it just printed its last line on top of them.

How to access the screen

It took me a bit of googling, but eventually I found out that, to draw on the screen in Linux, you use the framebuffer. As with most things in Linux, the frame buffer is a pseudo-file that you can just open and write to. This pseudo-file resides at /dev/fb0, and it is the whole reason for the extra hoops we jumped through in Part 1: a minimal Ubuntu doesn’t have this file.

So if you look at the file linux/framebuffer.hpp in our winner Git repository, it simply opens that file and maps it into memory, using the ioctl() function and some selector constants defined in the system header linux/fb.h to find out how large our screen is and how the pixels are laid out.

This is necessary, as at this low level, a screen is simply a long chain of bytes. Third row chained after second row after first row. Each row consists of pixels, which consist of R, G, B and optionally alpha components.

By mapping it into memory, we can use the screen just like any other block of memory and don’t have to resort to seek() and write() to change pixels on the screen.
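Condensed down, the gist of what framebuffer.hpp does looks like this (a sketch of the idea, not the repository’s literal code):

#include <fcntl.h>
#include <stdint.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int fd = open( "/dev/fb0", O_RDWR );	// Open the frame buffer pseudo-file.

struct fb_var_screeninfo vinfo;	// Resolution, bits per pixel, x/y offsets…
struct fb_fix_screeninfo finfo;	// Buffer size, bytes per row (“line length”)…
ioctl( fd, FBIOGET_VSCREENINFO, &vinfo );
ioctl( fd, FBIOGET_FSCREENINFO, &finfo );

// Map the frame buffer into memory so we can treat it like any other array:
uint8_t *pixels = mmap( NULL, finfo.smem_len,
                        PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0 );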

Esoterica

Since computers are sometimes faster when memory is aligned on certain multiples of numbers, and you also sometimes want to provide a frame buffer that is a subset of a bigger one (e.g. if a windowed operating system wanted to launch a framebuffer-based application and just trick it into thinking that the rectangle occupied by its window was the screen), the frame buffer includes a line length, x-offset and y-offset.

X and Y offset effectively shift all coordinates, i.e. they define the upper left corner of your screen inside the larger buffer. They’re usually 0 for our use case.

The line length is the number of bytes in one row of pixels, which may be larger than the number of pixels * number of bytes in one pixel, because it may include additional, unused “filler” bytes that the computer needs to more quickly access the memory (some computers access memory faster if it is e.g. on an even-numbered address).

Actually drawing into the frame buffer

The actual drawing code is in our image class, which doesn’t know about frame buffers. It just knows about a huge block of memory containing pixels, and its layout.

The main method in this class is set_pixel(), which calculates a pointer to the first byte of a pixel at a given coordinate and then, depending on the bit depth of the pixels in the bitmap, composes a 2-byte (16-bit) or 4-byte (32-bit) color value by filling out the given bits of our buffer.
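Reduced to its essentials, the pointer calculation is just the line-length arithmetic described above (a sketch; the real class also composes the color value from separate R, G and B components using the bit positions reported by the frame buffer info):

struct image
{
	uint8_t	*pixels;	// Start of the pixel memory.
	int		line_length;	// Bytes per row, including any filler bytes.
	int		bytes_per_pixel;	// 2 for 16-bit pixels, 4 for 32-bit ones.
};

// 32-bit case only, for brevity:
void set_pixel( struct image *img, int x, int y, uint32_t color )
{
	// Skip y rows, then x pixels, to find the first byte of our pixel:
	uint8_t *pixel = img->pixels + y * img->line_length + x * img->bytes_per_pixel;
	*(uint32_t *)pixel = color;
}

The later sketches in this post all assume this simplified set_pixel().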

All other drawing methods depend on this one:

Drawing rectangles

If you look at fill_rect, it simply takes a starting point (upper left corner of the rectangle) and then fills rows of pixels with that color.

To draw a frame around a rectangle is almost the same. We simply fill as many top and bottom rows as our line width dictates, and the rows in between get filled with a pixel (or whatever our line width is) at the left and right of our rectangle.
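With the set_pixel() sketched above, fill_rect boils down to two nested loops:

void fill_rect( struct image *img, int x, int y, int width, int height, uint32_t color )
{
	for( int row = y; row < y + height; row++ )	// One row of pixels at a time…
	{
		for( int col = x; col < x + width; col++ )	// …set every pixel in the row.
			set_pixel( img, col, row, color );
	}
}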

Drawing lines

Drawing one-pixel lines involves a tad of basic maths, but it’s nothing that you couldn’t get from a quick glance at Wikipedia. You take the line equation called the “point-slope-form”.

Then you calculate the line’s slope based on your start and end point. If the line is more horizontal than vertical, you loop over the X coordinate from start to end and use that and the slope to calculate the corresponding Y. If it is more vertical than horizontal, you loop over the Y coordinate to get the X instead.

Now, if you use this naïve approach, you may get small gaps in the line, because lines work with fractional numbers, while our computer screen only has full, integer pixels. This is why this example uses a variation on the same process that was invented by someone named “Bresenham”, which keeps track of the loss of precision and adds pixels in as needed.
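For reference, the textbook integer-only Bresenham that covers all directions looks something like this (adapted from the usual published form, not copied from the winner repository):

#include <stdlib.h>	// for abs()

void draw_line( struct image *img, int x0, int y0, int x1, int y1, uint32_t color )
{
	int dx = abs( x1 - x0 ), sx = (x0 < x1) ? 1 : -1;
	int dy = -abs( y1 - y0 ), sy = (y0 < y1) ? 1 : -1;
	int err = dx + dy;	// The running error term that tracks the lost precision.

	for( ;; )
	{
		set_pixel( img, x0, y0, color );
		if( x0 == x1 && y0 == y1 )
			break;
		int e2 = 2 * err;
		if( e2 >= dy ) { err += dy; x0 += sx; }	// Accumulated error says: step in X.
		if( e2 <= dx ) { err += dx; y0 += sy; }	// Accumulated error says: step in Y.
	}
}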

Now, drawing a line of more than one pixel width is a little harder. You see, lines are really infinitely thin and don’t have a width. When you draw a line of a certain width, what computers usually do is either draw a rotated rectangle that is centered over the line, as long as the line and as wide as your line width, or simply rubber-stamp a filled square or circle of the line width centered over each point on the line, which gives a similar look.

I essentially go with the latter approach in this example, but since I plan to eventually support different opacity for pixels, I do not want to draw whole boxes each time, because they would overlap and a 10% opaque line would end up 20% opaque in every spot where they overlap. So I just detect whether a line is mainly horizontal or vertical, then draw a horizontal or vertical 1 pixel line of the line width through each point.

This isn’t quite perfect and gives diagonal lines a slanted edge, and makes them a bit too wide, so I eventually plan to at least change the code so the small lines are drawn at a 90° angle to the actual line you’re drawing. But that’s not done yet.

Drawing circles

Again, I just get the equation for circles off Wikipedia. It says that r² = (x − centerX)² + (y − centerY)², where “r” is the radius of the circle you want to draw, x and y are the coordinates of any point which you want to test whether it is on the circle, and centerX and centerY are the center of the circle.

Once you know that, you can draw a circle like you draw a rectangle. You calculate the enclosing rectangle of our circle (by subtracting/adding the radius from/to the center point) and then, instead of just drawing the rectangle, you insert each point into the circle equation. If the right-hand side comes out as r² or less, the point is in the circle and you can draw it; otherwise you skip this point.

Drawing the outline of a circle is just a specialized version of filling it here. Instead of only checking whether the equation comes out as ≤ r², you also check whether it is greater than (r − lineWidth)². So essentially you’re checking whether a point lies between two circles: the inner edge of your outline, and the outer edge of it.
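Both the filled and the outlined variant fit in one function (again a sketch using the simplified set_pixel(); a lineWidth of 0 here means “fill the whole circle”):

void draw_circle( struct image *img, int centerX, int centerY, int r, int lineWidth, uint32_t color )
{
	for( int y = centerY - r; y <= centerY + r; y++ )	// Walk the enclosing rectangle.
	{
		for( int x = centerX - r; x <= centerX + r; x++ )
		{
			int dSq = (x - centerX) * (x - centerX) + (y - centerY) * (y - centerY);
			int inner = r - lineWidth;
			if( dSq <= r * r	// Inside the outer edge…
				&& (lineWidth == 0 || dSq > inner * inner) )	// …and outside the inner edge.
				set_pixel( img, x, y, color );
		}
	}
}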

This is probably not the optimal way to draw a circle, but it looks decent and is easy enough to understand. There are many tricks. For example, you could calculate only the upper right quarter of the circle, then flip each coordinate horizontally and vertically around the center and thus draw 4 points with every calculation. Bresenham even came up with an algorithm where you only calculate 1/8th of a circle’s pixels.

Ovals

The library doesn’t do ovals yet, but I think they could be implemented by using the circle equation and multiplying the coordinate of the longer side of the surrounding rectangle by the ratio between width and height. That way, your coordinates are “projected onto a square”, in which you can use the circle equation.

There are probably more efficient ways to do this.
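For what it’s worth, here is how that idea might look. Multiplying the circle equation through by both radii squared avoids fractional numbers entirely (a and b are the horizontal and vertical radius; untested, in the speculative spirit of the paragraph above):

void fill_oval( struct image *img, int centerX, int centerY, int a, int b, uint32_t color )
{
	for( int y = centerY - b; y <= centerY + b; y++ )
	{
		for( int x = centerX - a; x <= centerX + a; x++ )
		{
			long dx = x - centerX, dy = y - centerY;
			// (dx/a)² + (dy/b)² <= 1, multiplied through by a²b²:
			if( dx * dx * b * b + dy * dy * a * a <= (long)a * a * b * b )
				set_pixel( img, x, y, color );
		}
	}
}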

Drawing bitmaps and text

To draw a bitmap (or rather, a pixel map) is basically a special case of rect drawing again. You take a buffer that already contains the raw pixels (like letterA in our example main.cpp). For simplicity, the code currently assumes that all images that you want to draw to the screen use 32-bit pixels. That also allows us to have a transparency value in the last 8 bits.

It simply draws a rectangle that is the size of the image, but instead of calling set_pixel() with a fixed color, it reads the color from the corresponding pixel in the pixel buffer we are supposed to draw. It also only draws pixels that are 100% opaque.
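As a sketch (assuming, as described above, 32-bit source pixels with the transparency value in the low byte):

void draw_image( struct image *img, int x, int y, const uint32_t *src, int srcWidth, int srcHeight )
{
	for( int row = 0; row < srcHeight; row++ )
	{
		for( int col = 0; col < srcWidth; col++ )
		{
			uint32_t pixel = src[row * srcWidth + col];
			if( (pixel & 0xff) == 0xff )	// Only copy 100% opaque pixels.
				set_pixel( img, x + col, y + row, pixel );
		}
	}
}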

Text drawing is now simply a special case of this. You create a bitmap for every letter, then when asked to draw a certain character, load the corresponding bitmap and draw that. Of course, serious text processing would be more complex than that, but that is the foundational process as far as a drawing engine is concerned.

You’d of course need a text layout engine on top of that to handle wrapping, and other code to e.g. combine decomposed characters. Also, if you wanted to support the full Unicode character set (or even just all Chinese glyphs), you’d probably want to make your look-up happen in a way that you don’t need to load all bitmaps immediately, but can rather lazy-load them as they are used.

Clipping

When we later implement our own window manager, we will need to be able to have windows overlap. To do that, we need to be able to designate areas as “covered” and have set_pixel() just not draw when asked to draw into those.

This is not yet implemented. The general approach is to have a bitmap (i.e. a pixel buffer whose pixels only occupy 1 bit, on or off) of the same size as our pixel buffer that indicates which pixels may be drawn into (usually that’s called a “mask”).
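The check itself would be simple; set_pixel() would just return early whenever the mask bit for its coordinate is 0. A sketch of this not-yet-implemented idea:

// 1 bit per pixel, rows padded to whole bytes:
int may_draw( const uint8_t *mask, int maskRowBytes, int x, int y )
{
	uint8_t maskByte = mask[y * maskRowBytes + x / 8];
	return (maskByte >> (7 - (x % 8))) & 1;	// Bit set == pixel may be drawn.
}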

There are of course various optimizations you can apply to this. The original Macintosh’s QuickDraw engine used a compressed form of a bitmap called a “Region”, which simply contained entries for the pixels in each line, indicating the length of each run. I.e. “5 pixels off, 10 pixels full”. Some graphics engines simply only allow clipping to rectangles (which can be described by 4 coordinates). If all your windows are rectangular, that is sufficient.

The only clipping the image class currently implements is that circles that fall off any of the edges get clipped, and that rectangles and bitmaps that fall off the bottom or right edges get clipped. The way rectangles are currently specified, it is impossible to have them fall off the left or top, as that would require negative coordinates.

If you currently try to draw outside the image’s defined area using set_pixel(), you will corrupt memory. For a shipping drawing system you’d want to avoid this, and we’ll get to this once we implement a higher-level drawing system on top of this one that deals with clipping, coordinate systems and transformations.

Raw graphics output on Linux: Part 1

DrawingOnLinux1

In my quest to understand better how my computer works, I decided I want to write a very minimal window server. The first step in that is to create something that performs raw graphics output to the screen, directly to its back buffer.

So, as a test bed, I decided to grab the VirtualBox emulator and install Ubuntu Minimal on it. Ubuntu Minimal is a (comparatively) small Linux that is still easy to install, and will provide the graphics drivers we’ll be talking to, and a file system and a loader to load the code to run.

If you just want to know how drawing itself works, feel free to skip to Part 2 in this blog series.

Setting up the virtual machine

Setting up a VM is fairly self-explanatory with the setup assistant in VirtualBox. It has presets for Linux and even for various Ubuntus, and most of the time the defaults are fine for us:

Screen Shot 2015-10-03 at 01.15.15

Screen Shot 2015-10-03 at 01.15.44

Screen Shot 2015-10-03 at 01.15.51

Screen Shot 2015-10-03 at 01.16.06

Screen Shot 2015-10-03 at 01.16.19

I’m choosing to name the VM “Winner”, short for window server, but you can choose whatever name you like:

Screen Shot 2015-10-03 at 01.16.34

Now you have a nice, empty emulated computer:

Screen Shot 2015-10-03 at 01.16.50

Now, we need to tell it to pretend that the mini.iso Linux disk image file we downloaded from Ubuntu was a CD inserted in its optical drive by selecting the “Empty” entry under the CD, then clicking the little disc icon next to the popup on the right to select a file:

Screen Shot 2015-10-03 at 01.17.14

Note that you would have to use the “Choose Virtual Optical Disk File…” item, I have the mini.iso entry in here already because I previously selected the file.

Screen Shot 2015-10-03 at 01.17.28

Screen Shot 2015-10-03 at 01.17.40

Now you can close the window using the “OK” button and click the green “Start” arrow toolbar icon to boot the emulated computer.

Installing Ubuntu Minimal

Screen Shot 2015-10-03 at 01.18.35

Ubuntu will boot up. Choose “Command-Line install” and use the arrow and return keys to navigate through the set-up. Pick your language, country and keyboard layout (if you’re on a Mac, choose to tell it instead of having it detect, and pick the “Macintosh” variant they offer):

Screen Shot 2015-10-03 at 01.18.49

It will then churn a bit:

Screen Shot 2015-10-03 at 01.21.03

And then it will ask you to name your computer:

Screen Shot 2015-10-03 at 01.21.24

You can pick pretty much any name for your emulated home computer, it doesn’t really matter for what we are doing. I picked “winner”.

Then it will ask you to choose the country you are currently in, so it can pick the closest server for downloading additional components:

Screen Shot 2015-10-03 at 01.21.35

And if they have several servers in your country, they’ll offer a choice. Just pick whatever it offers you, it’ll be fine.

Screen Shot 2015-10-03 at 01.21.58

Then it will ask you if you need to use a proxy. Unless you’re in a weird restrictive company or university network or trying to get around an oppressive government’s firewall, you can just leave the field empty and press return here to indicate no proxy is needed:

Screen Shot 2015-10-03 at 01.22.18

Then it will churn some more, downloading stuff off the internet etc.:

Screen Shot 2015-10-03 at 01.22.42

Now it’s time to set up your user account, password (twice) etc.:

Screen Shot 2015-10-03 at 01.23.39

Screen Shot 2015-10-03 at 01.23.45

In this emulator, we don’t need an encrypted hard disk (If you need it, your computer’s hard disk is probably already encrypted, and your emulated computer’s files are all stored on that anyway).

Screen Shot 2015-10-03 at 01.24.40

Then it will ask you about some system clock settings (the defaults should all be fine here):

Screen Shot 2015-10-03 at 01.25.06

Then it will ask how to partition and format the hard disk. You’re not dual-booting anything, the emulated computer is for Linux only, so just let it use the entire disk:

Screen Shot 2015-10-03 at 01.25.31

And don’t worry about selecting the wrong disk, it will only offer the emulated hard disk we created. Tell it to create whatever partitions it thinks are right:

Screen Shot 2015-10-03 at 01.26.02

And it will churn and download some more:

Screen Shot 2015-10-03 at 01.26.11

Since we may want to keep using this for a while, let’s play it safe and tell it to apply any important updates automatically:

Screen Shot 2015-10-03 at 01.36.03

And when it asks if it is OK to install the boot loader in the MBR, just say yes:

Screen Shot 2015-10-03 at 01.38.22

Again, there is no other operating system inside this emulation; they’re just being overly cautious because so many Linux users have weird setups.

For the same reason, you can just let it run the emulator with a UTC system clock as it suggests:

Screen Shot 2015-10-03 at 01.38.38

That’s pretty much all. Tell it to restart, and quickly eject the CD disk image by un-checking it from your “Devices” menu:

Screen Shot 2015-10-03 at 01.38.39

Setting up Ubuntu

Ubuntu is pretty much ready to go. You’ll have a neat command line OS. However, for our purposes, we want to have graphics card drivers. Since this is the minimal Ubuntu, a lot is turned off, so let’s turn that back on again and install some missing parts that we want for our experiments. Log in with your username and password and edit the configuration file /etc/default/grub which tells the bootloader what to do:

Screen Shot 2015-10-03 at 12.22.58

If you’re unfamiliar with the Unix Terminal, just type sudo nano /etc/default/grub and enter your password once it asks. sudo means pretend you’re the computer’s administrator (as we’re changing basic system settings, that’s why it wants your password). nano is a small but fairly easy to use text editor. It shows you all the commands you can use at the bottom in little white boxes, with the keyboard shortcuts used to trigger them right in them (“^” stands for the control key there):

Screen Shot 2015-10-03 at 12.23.33

Most of the lines in this file are deactivated (commented out) using the “#” character. Remove the one in front of GRUB_GFXMODE to tell it we want it to use a graphical display of that size, not the usual text mode that we’re currently using.
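On a stock Ubuntu, the line in question looks something like this once the “#” is removed:

GRUB_GFXMODE=640x480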

Save and close the file (WriteOut and Exit, i.e. Ctrl+O, Ctrl+X in nano).

Now usually this would be enough, but Ubuntu Minimal is missing a few components. So now type sudo apt-get install v86d. This tells Ubuntu to install the v86d package that does … something (as far as I can tell, it’s a userspace helper that uvesafb uses to run the graphics card’s BIOS code). If you left out this step, you would get an error message telling you that v86d doesn’t work in the next step. Confirm that you want to install these whopping 370kb of code by pressing “y” when asked. It will churn a bit.

Type in sudo modprobe uvesafb. The graphics drivers on Linux all implement the so-called “framebuffer” commands. That’s what “fb” here stands for. VirtualBox emulates a “VESA” display, and “uvesafb” is the modern version of the “vesafb” graphics driver you’d want for that. So we’re telling our Kernel to load that module now.

If all works, all that you should see is that your screen resizes to 640×480, i.e. becomes more square-ish:

Screen Shot 2015-10-03 at 12.25.54

Now we don’t want to manually have to activate the frame buffer every time, so let’s add it to the list of modules the Kernel loads automatically at startup. Type sudo nano /etc/initramfs-tools/modules to edit the module list and add “uvesafb” to the end of the list (in my case, that list is empty):

Screen Shot 2015-10-03 at 14.51.45

The professionals also suggest that you check the file /etc/modprobe.d/blacklist-framebuffer.conf to make sure it doesn’t list “uvesafb” as one of the modules not to load. If it does, just put a “#” in front of it to deactivate it.

Screen Shot 2015-10-03 at 12.51.22

Now run sudo update-initramfs -u which tells the system to re-generate some of the startup files that are affected by us adding a new module to the list. It will churn for a moment.

Now we need a nice compiler to compile our code with. There’s probably a copy of GCC already on here, but just for kicks, let’s use clang instead, which gives nicer error messages. Enter sudo apt-get install clang:

Screen Shot 2015-10-03 at 12.26.34

Finally, we need a way to get our source code on this machine, so let’s install the git version control system:

sudo apt-get install git

OK, now pretty much everything we need is set up. Part 2 in this series will get us to actually running some code against this graphics card driver.

You can shut down your virtual Linux box until you’re ready to try Part 2 by typing sudo poweroff.

Microsoft supports UIKit

iPhoneOnWindows

This week’s Build conference held a big surprise: Microsoft announced that they’ve built a UIKit compatibility layer for their various flavours of Windows.

Now I’m mainly a Mac developer and only hear of Windows things from friends and colleagues at the moment (the last time I did Windows work was around Windows XP), but my impression so far was that MS was frantically searching for a new API.

I don’t remember all occurrences, but I remember them announcing Silverlight, and .NET with WPF, and Windows RT that only supported the new APIs, and all sorts of things to then cancel them again.

So my impression as an outsider is that new APIs weren’t trustworthy and MS would always fall back to supporting their old API main-line that they carry around for compatibility reasons anyway.

Announcing UIKit and Android support actually makes a lot of sense in that context:

Although it appears to acknowledge that Windows Phone really didn’t take off, it does solve the catch-22 that MS found themselves in: Lack of apps. In an ideal case, they’ll now get all iOS apps Apple sells, plus the ones Apple rejected for silly reasons, plus those Android apps that iOS users long for.

If this gambit pays off, MS could leap-frog Apple *and* Android.

It also increases trust among developers who are sticking to ancient API: iOS and Android are the only modern APIs that Microsoft could implement that developers would confidently develop against after all these false starts, because even if MS dropped support for them, they’d still have the entire iOS/Android ecosystem to deploy against. So coding against UIKit for Windows Phone is a reasonably safe investment.

Swift

Of course, the elephant in the room here is Apple’s recent move to Swift. Now, given that Apple’s frameworks still all seem to be Objective-C internally (even WatchKit), I don’t think MS have missed the train. They might even pick up some Swift critics that are jumping Apple’s ship by supporting Objective-C.

But Swift damages the long-term beauty of MS’s “just call native Windows API from Objective-C” story. They will have to bridge their API to Swift (like Apple does with some of their C-based API right now), instead of getting people to use more and more classic Windows API in their Cocoa apps until the code won’t run on iOS anymore.

Still, that’s a small aesthetic niggle. MS already have a code-generator back-end that they can plug any parser onto, and Swift doesn’t appear to be a particularly difficult language to parse. In any event, parsers are easier than good code generation. For MS to create a Swift compiler is a solved problem, and I’d be surprised if they weren’t already working on it.

Of course, if MS had known about Swift when they started their UIKit for Windows, would they still have written it in Objective-C? Or would they have just written it in Swift with a bridging header?

So given the situation MS have managed to get themselves into, this sounds like it might be a viable solution to survive and, maybe, even come back from. Still, it is an acknowledgement of how far MS has fallen that they need to implement a competitor’s API on their platform.

Death to Booleans!

DeathToBooleans

One of the most annoying aspects of most C-descended languages is that function calls become kind of unreadable when they have more than a single boolean parameter. The calls start looking like:

    OpenFile( "/etc/passwd", true, true, false );

and you have no idea what effect each boolean actually has. Sometimes people solve this by naming all parameters in the function name, but of course that doesn’t permit adding more optional parameters to a function later, because you’d have to change the name:

    OpenFilePathEditableSaveSavingAllowNetworkURLs( "/etc/passwd", true, true, false );

A disciplined programmer will solve this by adding an enum and using that instead of the booleans:

    enum FileEditability { kReadOnly, kEditable };
    enum FileSafeSaveability { kSafeSave, kOverwriteInPlace };
    enum FileAllowNetworkURLs { kFileURLsOnly, kAllowNetworkURLs };
    void    OpenFile( const char* path, enum FileEditability fe, enum FileSafeSaveability fs, enum FileAllowNetworkURLs fu );

Or maybe just make all booleans a “flags” bitfield:

    enum
    {
        kEditable = (1 << 0),
        kSafeSave = (1 << 1),
        kAllowNetworkURLs = (1 << 2)
    };
    typedef uint32_t FileOpenFlags;
    void    OpenFile( const char* path, FileOpenFlags inFlags );

But that requires the foresight to never use a single boolean. And of course the actual discipline.

Wouldn't it be nice if C had a special provision for naming booleans? My first thought was to allow specifying enums in-line for parameters:

    void OpenFile( const char* path, enum { kReadOnly, kEditable } inReadOnly );

But to be convenient, this would require some rather too-clever scoping rules. It'd be easy to make the enum available to all callers when they directly call the function, but what about cases where you want to store the value in a variable? Maybe we could do C++-style scope resolution and allow saying OpenFile::kReadOnly ?

Would be a nice way to make it easy to name parameters, but not really readable.

I guess that's why other languages have named parameters instead. Avoids all those issues. So...

The boolean is dead! Long live the boolean! (as long as you have named parameters to label them with)
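Objective-C’s selector syntax is one long-standing take on this: every argument gets a label at the call site, so the booleans document themselves (a hypothetical API, not a real Foundation call):

    [file openAtPath: @"/etc/passwd"
            editable: YES
            safeSave: YES
    allowNetworkURLs: NO];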

Handling keypresses in Cocoa games

WASDKeys

At first blush, keyboard event handling for games in Cocoa seems easy: you add -acceptsFirstResponder and -becomeFirstResponder overrides to your custom game map view, then override -moveUp:, -moveDown:, -moveLeft: and -moveRight: to handle the arrow keys.

However, if you play a game like that, you’ll notice one big difference from most other games: it only ever accepts one keypress at a time. So if you’re holding down the up arrow key to have your character run forward, then quickly press the right arrow key to sidestep an obstacle, your character will stop in its tracks, as if you had released the up arrow key.

This makes sense for text entry, where you might accidentally still be holding down one character while another finger presses the next, but for a game this is annoying. You want to be able to chord arbitrary key combinations together.

I found a clever solution for game keyboard handling on the CocoaDev Wiki, but it’s a bit old and incomplete, so I thought I’d provide an updated technique:

The solution is to keep track of which keys are down yourself. Override -keyDown: and -keyUp: to keep track of which keys are being held down. I’m using a C++ unordered_set for that, but an Objective-C NSMutableIndexSet would work just as well:

@interface ICGMapView : NSView
{
	std::unordered_set<unichar>	pressedKeys;
}

@end

and in the implementation:

-(void)	keyDown:(NSEvent *)theEvent
{
	NSString	*	pressedKeyString = theEvent.charactersIgnoringModifiers;
	unichar			pressedKey = (pressedKeyString.length > 0) ? [pressedKeyString characterAtIndex: 0] : 0;
	if( pressedKey )
		pressedKeys.insert( pressedKey );
}


-(void)	keyUp:(NSEvent *)theEvent
{
	NSString	*	pressedKeyString = theEvent.charactersIgnoringModifiers;
	unichar			pressedKey = (pressedKeyString.length > 0) ? [pressedKeyString characterAtIndex: 0] : 0;
	if( pressedKey )
	{
		auto foundKey = pressedKeys.find( pressedKey );
		if( foundKey != pressedKeys.end() )
			pressedKeys.erase(foundKey);
	}
}

Of course, you’ll also want to react to modifier keys, and like most games, you will want to treat them not as modifiers in a shortcut, but as regular keys, so people can press Command to fire, or so. That’s basically the same, just that you override -flagsChanged: and that there are no standard character constants for the modifier keys. So let’s just define our own:

// We need key codes under which to save the modifiers in our "keys pressed"
//	table. We must pick characters that are unlikely to be on any real keyboard.
//	So we pick the Unicode glyphs that correspond to the symbols on these keys.
enum
{
	ICGShiftFunctionKey			= 0x21E7,	// -> NSShiftKeyMask
	ICGAlphaShiftFunctionKey	= 0x21EA,	// -> NSAlphaShiftKeyMask
	ICGAlternateFunctionKey		= 0x2325,	// -> NSAlternateKeyMask
	ICGControlFunctionKey		= 0x2303,	// -> NSControlKeyMask
	ICGCommandFunctionKey		= 0x2318	// -> NSCommandKeyMask
};

-(void)	flagsChanged: (NSEvent *)theEvent
{
	// Each modifier flag, paired with the character constant we file it under:
	static const struct { NSUInteger mask; unichar key; } sModifierKeys[] =
	{
		{ NSShiftKeyMask,		ICGShiftFunctionKey },
		{ NSAlphaShiftKeyMask,	ICGAlphaShiftFunctionKey },
		{ NSControlKeyMask,		ICGControlFunctionKey },
		{ NSCommandKeyMask,		ICGCommandFunctionKey },
		{ NSAlternateKeyMask,	ICGAlternateFunctionKey }
	};
	
	for( const auto& currModifier : sModifierKeys )
	{
		if( theEvent.modifierFlags & currModifier.mask )
			pressedKeys.insert( currModifier.key );
		else
			pressedKeys.erase( currModifier.key );	// erase() ignores keys that aren't in the set.
	}
}

An alternative would be to just enlarge the numeric type used to store keys in your unordered_set. Instead of two-byte unichar values, you’d just pick uint32_t, and then define the constants as values that are out of range for an actual unichar, like 0xffff1234. If you’re using NSMutableIndexSet, you’re lucky: it uses NSInteger, which is already larger.

Then add an NSTimer to your class that periodically checks whether any keys are pressed, and if so, reacts to them:

-(void) dispatchPressedKeys: (NSTimer*)sender
{
	BOOL	shiftKeyDown = pressedKeys.find(ICGShiftFunctionKey) != pressedKeys.end();
	for( unichar pressedKey : pressedKeys )
	{
		switch( pressedKey )
		{
			case 'w':
				[self moveUp: self fast: shiftKeyDown];
				break;
			...
		}
	}
}

Since your timer is polling at an interval here, and you can’t make that interval too fast because it’s the rate at which key repeats will be sent, it is theoretically possible that you would lose keypresses whose duration is shorter than your timer interval. To avoid that, you could store a struct in an array instead of just the keypress in a set. This struct would remember when the key was originally pressed down, and when the last key event was sent out.

That way, when the user begins holding down a key, you’d immediately trigger processing of this key once, and make note of when that happened. From then on, your -dispatchPressedKeys: method would check whether it’s been long enough since the last time it processed that particular key, and would send key repeats for each key that is due. As a bonus, when a key is released, you could also notify yourself of that.
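Such a record might look something like this (a sketch, with made-up names):

typedef struct ICGKeyPress
{
	unichar			key;				// The key being held down.
	NSTimeInterval	originalPressTime;	// When it first went down, so even short taps get processed once.
	NSTimeInterval	lastRepeatTime;		// When we last sent a key repeat for it.
} ICGKeyPress;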

You could even create “key event” objects of some sort to hand into your engine.

Adding Lua 5.2 to your application

LuaPlanetsInSpace

Note: The code in here is adapted from an actual project; however, I’ve not yet had time to verify that it doesn’t have typos. Google search results are so overflowing with info on old Lua versions that I wanted to dump this to the web now, in the hopes of it being at least vaguely helpful.

Lua is a really cool, clean little programming language that is easy to embed in your applications. Not only is it under a permissive license, it’s ANSI C.

However, recent updates have made most of the documentation about it on the web a bit outdated, so I thought I’d drop this quick tutorial on how to add Lua to your application and do some of the typical inter-operation things with it that you’d want to do when hosting scripts in your application.

Building Lua

Building Lua is very easy. After getting the source code (I’m using the unofficial Git repository from LuaDist on GitHub), you duplicate the file lua/src/luaconf.h.orig under the name lua/src/luaconf.h. Then you point Terminal at Lua’s folder and do

make macosx

(Or if you’re not on a Mac, use the appropriate platform name here, you can see available ones by just calling make without parameters in that folder)

This will churn a short moment, and then you’ll have a liblua.a file. Add that to your Xcode project (or equivalent) so it gets linked in, and make sure the header search paths include the lua/src/ folder. That’s it, now you can use Lua in your application.

If you’re using Xcode, I recommend you just add an aggregate target named “liblua.a” to your project and give it a shell script build phase like the following:

Liblua build phase for Xcode

cd ${PROJECT_DIR}/lua/src/
if [ ! -f luaconf.h ]; then
    cp luaconf.h.orig luaconf.h
fi
make macosx

By specifying ${PROJECT_DIR}/lua/src/lua.h as the input file and ${PROJECT_DIR}/lua/src/liblua.a as the output file, Xcode will take care not to rebuild Lua unnecessarily if you make your application depend on this target.

Running a Lua script

To use Lua, you include the following headers:

#include "lua.h"
#include "lauxlib.h"
#include "lualib.h"

(If you’re using C++, be sure to wrap them in extern "C" or you’ll get link errors) Then you can simply compile the following code to initialize a Lua context and run a script from a text file:

lua_State *L = luaL_newstate();	// Create a context.
luaL_openlibs(L);	// Load Lua standard library.

// Load the file:
int s = luaL_loadfile( L, "/path/to/file.lua" );

if( s == 0 )
{
	// Run it, with 0 params, accepting an arbitrary number of return values.
	//	Last 0 is error handler Lua function's stack index, or 0 to ignore.
	s = lua_pcall(L, 0, LUA_MULTRET, 0);
}

// Was an error? Get error message off the stack and print it out:
if( s != 0 )
{
	printf("Error: %s\n", lua_tostring(L, -1) );
	lua_pop(L, 1); // Remove error message from stack.
}
	
lua_close(L);	// Dispose of the script context.

The script file would contain something like:

-- this is a comment
io.write("Hello world, from ",_VERSION,"!\n")

Calling from Lua into C

Now you can run a file full of commands. But how do you have it call back into your application? There’s a special call for that, lua_register, which creates a new function that actually wraps a special C function. You call it like this:

// Create a C-backed Lua function, myavg():
lua_register( L, "myavg", foo );	// Create a global named "myavg" and stash an unnamed function with C function "foo" as its implementation in it.

to register a C function named foo as a Lua function named myavg. The actual function would look like this:

// An example C function that we call from Lua:
static int foo (lua_State *L)
{
	int n = lua_gettop(L);    /* number of arguments */
	lua_Number sum = 0;
	int i;
	for (i = 1; i <= n; i++)
	{
		if (!lua_isnumber(L, i))
		{
			lua_pushstring(L, "incorrect argument");
			lua_error(L);
		}
		sum += lua_tonumber(L, i);
	}
	lua_pushnumber(L, sum/n);        /* first result */
	lua_pushnumber(L, sum);         /* second result */
	return 2;                   /* number of results */
}

This example function loops over all parameters that have been passed (using lua_isnumber to check that they’re numbers, and lua_tonumber to actually retrieve them as numbers), which may be a variable number, adds and averages them, and then pushes two return values on the stack (the average and the sum), and returns the number of return values it gave.

Functions in Lua and other oddities

You could now call it like:

io.write( "Average is: ", myavg(1,2,3,4,5) )

from Lua. The funny thing here is, in Lua, there are no functions in the traditional sense. It’s a prototype-based programming language, so all functions are closures/blocks/lambdas, and can be treated just like any value, like an integer or a string. To declare a function, lua_register simply creates a global variable named myavg and sticks such a function object in it.

When you declare a function in Lua, it’s also really just a shorthand for an assignment statement. So to run a function declared in a Lua file, like:

function main( magicNumber )
    io.write("Main was called with magicNumber ", magicNumber, "!")
end

you first have to execute it, which will create the global named main and stick a function in it. Only now do you look up the function object from that global and call it, again using lua_pcall like here:

lua_getglobal(L,"main");
if( lua_type(L, -1) == LUA_TNIL )
    return; // Function doesn't exist in script.
lua_pushinteger(L,5);
s = lua_pcall(L, 1, LUA_MULTRET, 0);	// Tell Lua to expect 1 param & run it.

The 2nd parameter to lua_pcall tells it how many parameters to expect.

Creating Lua objects from C

Objects are likewise just tables (i.e. key-value dictionaries) where ivars are just values, and methods are functions stored as values. So, to create a new object with methods implemented in C, you do:

// Create a C-backed Lua object:
lua_newtable( L );	// Create a new object & push it on the stack.
	
// Define mymath.durchschnitt() for averaging numbers:
lua_pushcfunction( L, foo );	// Create an (unnamed) function with C function "foo" as the implementation.
lua_setfield( L, -2, "durchschnitt" );	// Pop the function off the back of the stack and into the object (-2 == penultimate object on stack) using the key "durchschnitt" (i.e. method name).
lua_setglobal( L, "mymath" );	// Pop the object off the stack into a global named "mymath".

To call this function, you do it analogously to before, just that you first use lua_getglobal( L, "mymath" ) to push the object on the stack, then lua_getfield to actually push the “durchschnitt” function stored under that key in the object.
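Put together, calling mymath.durchschnitt() with two numbers looks something like this:

lua_getglobal( L, "mymath" );	// Push the "mymath" object on the stack.
lua_getfield( L, -1, "durchschnitt" );	// Push the function stored under the key "durchschnitt".
lua_pushnumber( L, 4 );	// First parameter.
lua_pushnumber( L, 8 );	// Second parameter.
int s = lua_pcall( L, 2, 2, 0 );	// Call with 2 parameters, expecting 2 return values.
if( s == 0 )
{
	printf( "Average: %f Sum: %f\n", lua_tonumber( L, -2 ), lua_tonumber( L, -1 ) );
	lua_pop( L, 2 );	// Remove the two return values.
}
else
	lua_pop( L, 1 );	// Remove the error message.
lua_pop( L, 1 );	// Remove the "mymath" object again.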

Since functions are closures/blocks/lambdas, they can also capture variables (“upvalues”). To set those, you use lua_pushcclosure instead of lua_pushcfunction and pass the number of values you pushed on the stack to capture as the last parameter. E.g. if you wanted to pass along a pointer to an object in your program that the session object wraps, instead of stashing it in an ivar, you could capture it like:

// Define session.write() for sending a reply back to the client:
lua_pushlightuserdata( L, sessionPtr );	// Create a value wrapping a pointer to a C++ object (this would be dangerous if we let the script run longer than the object was around).
lua_pushcclosure( L, session_write, 1 );// Create an (unnamed) function with C function "session_write" as the implementation and one associated value (think "captured variable", our userdata on the back of the stack).
lua_setfield( L, -2, "write" );	// Pop the function value off the back of the stack and into the object (-2 == penultimate object on stack) using the key "write" (i.e. method name).
lua_setglobal( L, "session" );	// Pop the object off the stack into a global named "session".

and inside the session_write function, you’d retrieve it again like:

	session*	sessionPtr = (session*) lua_touserdata( L, lua_upvalueindex(1) );
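
A complete session_write might then look something like this sketch (send_reply_to_client is a hypothetical stand-in for however your session type actually sends data back):

static int session_write( lua_State *L )
{
	session*	sessionPtr = (session*) lua_touserdata( L, lua_upvalueindex(1) );	// The captured pointer.
	const char*	msg = lua_tostring( L, 1 );	// First Lua argument: the text to send.
	if( msg )
		send_reply_to_client( sessionPtr, msg );	// Hypothetical; substitute your own send call.
	return 0;	// No return values.
}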

Overriding Lua’s getters and setters with C

And finally, what if you wanted to have properties on this object that, when read or set, actually call into your C code? You install a metatable on your object, which contains a __newindex (setter) and an __index (getter) function:

        // Set up our 'session' table:
        lua_newtable( luaState );   // Create object to hold session.
        lua_newtable( luaState );   // Create metatable of object to hold session.
        lua_pushlightuserdata( luaState, myUserData );
        lua_pushcclosure( luaState, get_variable_c_func, 1 );    // Wrap our C function in Lua.
        lua_setfield( luaState, -2, "__index" ); // Put the Lua-wrapped C function in the metatable as "__index".
        lua_pushlightuserdata( luaState, myUserData );
        lua_pushcclosure( luaState, set_variable_c_func, 1 );    // Wrap our C function in Lua.
        lua_setfield( luaState, -2, "__newindex" ); // Put the Lua-wrapped C function in the metatable as "__newindex".
        lua_setmetatable( luaState, -2 );   // Associate the metatable with the session object below it on the stack.

        lua_setglobal( luaState, "session" );    // Put the object holding session into a Lua global named "session".

Here, like before, myUserData is some pointer to whatever data you need to access to do your work in the getter/setter (like the actual C struct this Lua object stands for), and get_variable_c_func and set_variable_c_func are C functions you provide that get called to retrieve, and to add or change, instance variables of the session object.

Note that get_variable_c_func will receive 2 Lua parameters on the stack: The ‘session’ table itself, and the name of the instance variable you’re supposed to get. You return this value as the only return value. set_variable_c_func gets a third parameter, the value to assign to the variable, but obviously doesn’t return anything.
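
Put together, a getter might look like this sketch (magicNumber is a hypothetical ivar; a real getter would dispatch on whatever keys your object actually supports):

static int get_variable_c_func( lua_State *L )
{
	session*	sessionPtr = (session*) lua_touserdata( L, lua_upvalueindex(1) );	// Our captured pointer.
	const char*	key = lua_tostring( L, 2 );	// Param 1 is the table itself, param 2 is the ivar name.
	if( strcmp( key, "magicNumber" ) == 0 )	// strcmp needs <string.h>.
		lua_pushnumber( L, sessionPtr->magicNumber );	// Hypothetical field on our struct.
	else
		lua_pushnil( L );	// Unknown ivars read as nil, just like on a plain table.
	return 1;	// One return value.
}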

Dynamically providing your own globals

Sometimes you want to dynamically expose objects in your application to a script. One easy way to do that is to make them global variables. In our examples above, we did so by manually registering a new global with a table. But if you have lots of objects or they might change often, you don’t want to do that.

Luckily, Lua keeps all its globals in a table of their own (reachable from scripts as the global _G), which you can push on the stack from C using lua_pushglobaltable(L). Now you can install a metatable with an __index fallback function on that table as well. The key your callback gets will be the name of the global being looked up, which you can use to find the corresponding object in your application and dynamically generate and push a table for it.

Note: In the code above, we called lua_setglobal() at the end, which popped our table off the stack and stuffed it into the “_G” table. Since we’re not doing that here, be sure to do a lua_pop( L, 1 ) to remove the globals table from the stack again. Otherwise, your call to lua_pcall() will try to call that table and fail with the error “attempt to call a table value”.
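
Putting the pieces together, the setup might look like this sketch (lookup_global is a hypothetical __index callback you’d provide, following the same two-parameter convention as the getter above):

lua_pushglobaltable( L );	// Push the globals table ("_G").
lua_newtable( L );	// Create a metatable for it.
lua_pushcfunction( L, lookup_global );	// Hypothetical callback: receives (_G, name), pushes a value.
lua_setfield( L, -2, "__index" );	// metatable.__index = lookup_global.
lua_setmetatable( L, -2 );	// Attach the metatable to the globals table.
lua_pop( L, 1 );	// Pop the globals table again (see note above).

Conveniently, __index only fires for names that don’t already exist in the table, which is exactly the fallback behavior you want for globals.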

And now you know all you need to call Lua from C, and have Lua call your C functions back.

How Drawing on iOS Works

Someone on Stack Overflow recently asked about the various drawing APIs on iOS: what the difference is between using CALayers directly and using them indirectly through UIViews, and how CoreGraphics (aka Quartz) fits into the equation. Here is the answer I gave:

The difference is that UIView and CALayer essentially deal in fixed images. These images are uploaded to the graphics card (if you know OpenGL, think of an image as a texture, and a UIView/CALayer as a polygon showing such a texture). Once an image is on the GPU, it can be drawn very quickly, and even several times, and (with a slight performance penalty) even with varying levels of alpha transparency on top of other images.

CoreGraphics (or Quartz) is an API for generating images. It takes a pixel buffer (again, think OpenGL texture) and changes individual pixels inside it. This all happens in RAM and on the CPU, and only once Quartz is done does the image get “flushed” back to the GPU. This round-trip of getting an image from the GPU, changing it, then uploading the whole image (or at least a comparatively large chunk of it) back to the GPU is rather slow. Also, the actual drawing that Quartz does, while fast for CPU-based rendering, is way slower than what the GPU does.

That’s not surprising, considering the GPU mostly moves around unchanged pixels in big chunks, while Quartz does random access to individual pixels and shares the CPU with networking, audio etc. Also, if you draw several elements using Quartz into the same image, you have to re-draw all of them when one changes and then upload the whole chunk, whereas if you change one image and let UIViews or CALayers composite it onto your other images, you can get away with uploading much smaller amounts of data to the GPU.

When you don’t implement -drawRect:, most views can just be optimized away. They don’t contain any pixels, so can’t draw anything. Other views, like UIImageView, only draw a UIImage (which, again, is essentially a reference to a texture, which has probably already been loaded onto the GPU). So if you draw the same UIImage 5 times using a UIImageView, it is only uploaded to the GPU once, and then drawn to the display in 5 different locations, saving us time and CPU.

When you implement -drawRect:, this causes a new image to be created. You then draw into that on the CPU using Quartz. If you draw a UIImage in your drawRect, it likely downloads the image from the GPU, copies it into the image you’re drawing to, and once you’re done, uploads this second copy of the image back to the graphics card. So you’re using twice the GPU memory on the device.
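
For reference, the Quartz calls inside such a -drawRect: are plain C functions; a minimal sketch that fills a circle (the color and rectangle are arbitrary examples):

CGContextRef context = UIGraphicsGetCurrentContext();	// The context that draws into this view's backing image.
CGContextSetRGBFillColor( context, 0.0, 0.5, 1.0, 1.0 );	// An opaque blue.
CGContextFillEllipseInRect( context, CGRectMake( 10, 10, 100, 100 ) );	// Changes pixels on the CPU.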

So the fastest way to draw is usually to keep static content separated from changing content (in separate UIViews/UIView subclasses/CALayers), load static content as a UIImage and draw it using a UIImageView, and put content generated dynamically at runtime in a drawRect. If you have content that gets drawn repeatedly but doesn’t change by itself (e.g. 3 icons that get shown in the same slot to indicate some status), use UIImageView as well.

One caveat: There is such a thing as having too many UIViews. Transparent areas in particular take a bigger toll on the GPU, because they need to be blended with the pixels behind them when displayed. This is why you can mark a UIView as “opaque”, to indicate to the GPU that it can just obliterate everything behind that image.

If you have content that is generated dynamically at runtime but stays the same for the duration of the application’s lifetime (e.g. a label containing the user name) it may actually make sense to just draw the whole thing once using Quartz, with the text, the button border etc., as part of the background. But that’s usually an optimization that’s not needed unless the Instruments app tells you differently.