Macintosh


Cocoa and the Builder Pattern

There’s been a nice discussion about the Builder pattern on Twitter today. It’s a useful tool to have, particularly because it addresses a few common problems.

What Builder Pattern?

In short, the Builder pattern is a pattern where you have one object that you configure, and that object then creates another object based on that configuration. The nice thing here is that you can build up your configuration step by step, as you would with, e.g., NSMutableString, but the actual construction of the final object happens in one go. Very handy for immutable objects.

Usually, a setter on a Builder object returns self, like retain or autorelease do. That way, you can write something in Java or C++ that almost looks like Objective-C:

Image theImage = (new Image.Builder)->SetWidth(100)->SetHeight(80)->SetDepth(8)->Build();

Where the Build() method releases the builder and returns the actual, immutable Image object.
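The same chainable-setter idea can be sketched in plain C, using hypothetical Image and ImageBuilder types (the names and defaults here are made up for illustration). Each setter returns the builder so calls can be chained, and Build() frees the builder and hands back the finished, immutable value:

```c
#include <stdlib.h>

/* Hypothetical immutable Image and its Builder, sketched in plain C. */
typedef struct Image { int width, height, depth; } Image;

typedef struct ImageBuilder { int width, height, depth; } ImageBuilder;

ImageBuilder *NewImageBuilder(void) {
    ImageBuilder *b = calloc(1, sizeof *b);
    b->width = 64; b->height = 64; b->depth = 8;  /* defaults for everything */
    return b;
}

/* Each setter returns the builder, so calls can be chained. */
ImageBuilder *SetWidth(ImageBuilder *b, int w)  { b->width = w;  return b; }
ImageBuilder *SetHeight(ImageBuilder *b, int h) { b->height = h; return b; }
ImageBuilder *SetDepth(ImageBuilder *b, int d)  { b->depth = d;  return b; }

/* Build() disposes of the builder and returns the finished, immutable value. */
Image Build(ImageBuilder *b) {
    Image img = { b->width, b->height, b->depth };
    free(b);
    return img;
}
```

With this sketch, `Build(SetHeight(SetWidth(NewImageBuilder(), 100), 80))` gives you a finished Image, and any setter you don’t call simply leaves the default in place.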

Extending init methods

Adding a parameter to an initializer in Objective-C is annoying. You usually add the parameter to the initializer, then create a compatibility version under the old method’s name that calls the newer version with a default value for the extra parameter.

C++ has solved that problem by allowing you to specify default values for parameters (Java makes you write the extra overloads by hand), but that doesn’t maintain binary stability: if a parameter is added, you still have to recompile, but at least you don’t need to change your code.

I guess one fix would be if ObjC supported default arguments to a parameter that would simply result in the creation of a second version of this initializer with the label and parameter removed:

-(id) initWithBanana: (NSBanana*)theBanana curvature: (CGFloat)curvature = 5
{
    // magic happens here
}

Would be the same as writing:

-(id) initWithBanana: (NSBanana*)theBanana curvature: (CGFloat)curvature
{
    // magic happens here
}


-(id) initWithBanana: (NSBanana*)theBanana
{
    return [self initWithBanana: theBanana curvature: 5];
}

Of course, you’d still need at least one parameter, because ObjC has no way of knowing which part of the message is the name and which is the label for the second parameter (for init there could be special code, I guess, but what about a -exfoliateCow:withSpeed: method?). And defaulting to -initWithBanana: if the first parameter has a default is obviously not always desirable either. Still, it would solve the annoyance of telescoping constructors.

The Builder pattern doesn’t have this problem. Each parameter has a setter that you use to set it. A new builder could have defaults for all parameters when it is created. Then you change the ones you want to customize, and call -build on it to get the new object. If a new setter is added, that’s fine. You don’t call it, you get the default. The maintainers only add the one setter, no compatibility method needed.

Thread safety and immutable objects

The easiest way to get thread safety is to prohibit data from changing. If data is immutable, there is nothing to be synchronized between threads, and no need for one thread to wait for the other. However, immutable objects are also annoying, as they need to be fully specified in their init method.

A case where this is a problem in Cocoa is NSImage. NSImage is an immutable object by convention, but not in fact. It is an object that has its own builder built in. You are expected to know that, for an NSImage to be thread-safe, you must create it, set its attributes, draw something into it, and then stop messing with it, treating it as an immutable, read-only object from then on.

The problem is, nobody enforces it. NSImage is a perfectly mutable object, with setters and getters. No exception is thrown when you violate this verbal contract. Of course Apple could have added a “makeImmutable” method to NSImage that causes such exceptions when you try to edit an instance afterwards. But then they’d have to add error-checking code to each setter (or, at the least, use some aspect-oriented-programming mechanism to inject a check before every setter automatically).

The Builder pattern would solve that: They can have a huge, private constructor on NSImage that changes with every release to add new parameters and initialize that immutable object, while the Builder would present a stable and convenient API to all clients. There would not be any setters on NSImage.

But it is ugly…

Admittedly, it feels a bit inelegant to build an object that builds an object. The way NSImage works is so much nicer. But Mike Lee actually offers a neat approach that works almost as well:

Just pass in a list of properties. This could be a dictionary of properties, or even just a variadic argument list like -dictionaryWithObjectsAndKeys: takes it. You’d define a constant for each possible property (that way if you mis-type the parameter name the compiler tells you, which you don’t get from a raw string). Internally, this constant could even hold the actual name of the property, even if it is never exposed as a method in the public header. So, all your constructor would do is call [self setValue: properties[key] forKey: key] in a loop, once for every element.

You get the same effect as labeled parameters (if you put the keys first, even more so). You also get the same effect as optional parameters. The binary ABI never changes, so that’s good, too. The only downside is you need to pass every parameter as an object, and you lose compile-time type checks. OTOH you gain compile-time errors when you try to change the object after creating it (because it declares no setters).

Is it worth all that work?

Admittedly, I haven’t had to add parameters to the init method of a public class that often. Nonetheless, I think Mike’s approach and the Builder pattern both are useful things to keep in mind if you ever come up with a class that can be created in numerous configurations (and is likely to gain new properties in the future) but should then be immutable. Class clusters and plug-in classes seem like a typical place where you might need this.

Are your rectangles blurry, pale and have rounded corners?

One common problem with drawing code in Cocoa (iOS and Mac OS X) is that people have trouble getting crisp, sharp lines. Often this problem ends up as a question like “How do I get a 1-pixel line from NSBezierPath” or “Why are my UIBezierPath lines fuzzy and transparent” or “Why are there little black dots at the corners of my NSRect”.

The problem here is that coordinates in Quartz are not pixels. They are actually “virtual” coordinates that form a grid. At 1x resolution (i.e. non-Retina), these coordinates, using a unit commonly referred to as “points” to distinguish them from actual pixels on a screen (or on a printer!), lie at the intersections between pixels. This is fine when filling a rectangle, because every pixel that lies inside the coordinates gets filled:

[Image: filled rectangle between pixels]

But lines are technically (mathematically!) invisible. To draw them, Quartz has to actually draw a rectangle with the given line width. This rectangle is centered over the coordinates:

[Image: coordinates between pixels]

So when you ask Quartz to stroke a rectangle with integral coordinates, it has the problem that it can only draw whole pixels. But here you see that we have half pixels. So what it does is average the color. For a pixel that is 50% black (the line color) and 50% white (the background), it simply draws 50% grey. For the corner pixels, which are covered 1/4th or 3/4ths, you get lighter or darker shades accordingly:

[Image: line drawing between pixels]

This is where your washed-out drawings, half-transparent and too-wide lines come from. The fix is now obvious: Don’t draw between pixels, and you achieve that by moving your points by half a pixel, so your coordinate is centered over the desired pixel:

[Image: coordinates on pixels]

Now of course just offsetting may not be what you wanted. Because if you compare the filled variant to the stroked one, the stroke is one pixel larger towards the lower right. If you’re e.g. clipping to the rectangle, this will cut off the lower right:

[Image: coordinates on pixels, lower right cut off]

Since people usually expect the rectangle to stroke inside the specified rectangle, what you usually do is that you offset by 0.5 towards the center, so the lower right effectively moves up one pixel. Alternately, many drawing apps offset by 0.5 away from the center, to avoid overlap between the border and the fill (which can look odd when you’re drawing with transparency).
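The “offset by 0.5 towards the center” adjustment can be sketched in plain C, with a hypothetical Rect struct standing in for CGRect/NSRect (names are made up for illustration):

```c
/* Hypothetical Rect type standing in for CGRect/NSRect. */
typedef struct Rect { double x, y, width, height; } Rect;

/* Inset a frame by half the line width on every side, so that a stroke
   of that width lands on whole pixels (at 1x) and stays entirely inside
   the original rectangle. */
Rect StrokeRectForFrame(Rect frame, double lineWidth) {
    double inset = lineWidth / 2.0;
    Rect r = { frame.x + inset, frame.y + inset,
               frame.width - lineWidth, frame.height - lineWidth };
    return r;
}
```

For a 10×10 frame and a 1-point line this yields {0.5, 0.5, 9, 9}, so the stroke’s edges sit exactly on pixel boundaries. It’s a fixed-number sketch, though; for Retina-proof code you’d use the backing-alignment APIs instead.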

Note that this only holds true for 1x screens. 2x Retina screens exhibit this problem differently, because each of the pixels below is actually drawn by 4 Retina pixels, which means they can actually draw the half-pixels needed for a 1 point wide line:

[Image: coordinates between pixels, at Retina resolution]

However, you still have this problem if you want to draw a line that is even thinner (e.g. 0.5 points or 1 device pixel). Also, since Apple may in the future introduce other Retina screens where e.g. every pixel could be made up of 9 Retina pixels (3x), you should really not rely on fixed numbers. Instead, there are now API calls to convert rectangles to “backing aligned”, which do this for you, no matter whether you’re running 1x, 2x, or a fictitious 3x. Otherwise, you may be moving things off pixels that would have displayed just fine:

[Image: coordinates on and between pixels on a future Retina display]

And that’s pretty much all there is to sharp drawing with Quartz.

The fast road to unit tests with Xcode

Supposedly Xcode has unit test support. I’ve never seen that work for more than two Xcode revisions. So I’ve come up with a minimal unit test scheme that works reliably.

1) Add a “command line tool” target (Foundation application, C++ application, whatever makes sense). Put your test code in its main.m or equivalent. After each test, print out a line starting with “error: ” if the test failed. If you want to see the successes as well, start those lines with “note: ”. Keep a counter of failed tests (e.g. in a global), and return that number from your main().

2) Add a “Run Shell Script” build phase to this target, at the very end. Set it to run ${TARGET_BUILD_DIR}/${PRODUCT_NAME}. Yes, that’s right, we make it build the unit test app, then immediately run it. Xcode will see the “error: ” and “note: ” lines and format them correctly, including making the build fail.

3) Optionally, if you want these tests to run with every build, make that command line tool target a dependency of your main app, so it runs before every build. Otherwise, just make sure your build tester regularly builds this test target.

4) Add a preprocessor switch to the tests that lets you change all “error:” lines into “warning:” instead. Otherwise, when a test fails, you won’t be able to run it in the debugger to see what’s actually going wrong.
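A minimal sketch of step 1’s harness in plain C (the function and variable names here are made up): a check function prints the Xcode-parsable “note: ”/“error: ” lines and bumps a global failure counter, which your main() then returns.

```c
#include <stdio.h>

static int gNumFailures = 0;   /* return this from main() at the end */

/* Print a line Xcode's build log parser understands, and count failures. */
static void ExpectTrue(int condition, const char *testName) {
    if (condition) {
        printf("note: %s passed.\n", testName);
    } else {
        printf("error: %s failed.\n", testName);
        ++gNumFailures;
    }
}
```

Your main() would call ExpectTrue() once per test and end with `return gNumFailures;`, so any failure makes the tool exit non-zero.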

Cocoa: String comparisons and the optimizer

A while ago, a friend came to me with this bit of code:

NSString *a = @"X";
NSString *b = @"X";
if( a == b )
{
    NSLog(@"Same!");
}

“How come it works with the == operator? Didn’t you have to call isEqualToString: in the old days?”

Before we answer his question, let’s go into what he implicitly already knew:

Why wouldn’t == work on objects?

By default, C compares two pointers by simply comparing the addresses. That is logical, fast, and useful. However, it is also a little annoying with strings, arrays and other collections, because you may have two collections that still contain identical objects.

If you have the phone books from 2013 and 2014, do you just want to compare the numbers 2013 and 2014 and be told: “No that’s not the same phone book”, or are you actually interested in whether their contents are different? If nobody’s phone book entry changed in a particular city, wouldn’t you want to know that and save yourself the trip to the phone company to pick up a new phone book?

Since all Objective-C objects are referenced through pointers, doing more than comparing the addresses requires special syntax. So NSString offers the isEqualToString: method which, if the pointers do not match, goes on to check the strings’ contents. It compares each character to the one at the same position in the second string to find out whether, even though they’re not the same slip of paper, they at least have the same writing on them.
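The same distinction exists with plain C strings: == compares the two pointers, while strcmp() walks the characters. A small sketch (CopyString is a made-up helper name):

```c
#include <stdlib.h>
#include <string.h>

/* Make a fresh heap copy of a C string. Two calls with the same text give
   two distinct "slips of paper" with the same writing on them. */
char *CopyString(const char *text) {
    char *copy = malloc(strlen(text) + 1);
    strcpy(copy, text);
    return copy;
}
```

Call `CopyString("X")` twice and the two pointers compare unequal, yet strcmp() reports the contents as identical — exactly the situation where == misleads you and isEqualToString: tells the truth.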

So why does the code above think they’re the same?

After all that, why does the code above think they are the same object after all? Doesn’t a point to the @"X" in the first line, and b to the @"X" in the second?

That is conceptually true, and it is what a naïve compiler would do. However, most compilers these days are smart. They know that a string constant can never change, and they see that the contents of both string objects pointed to by a and b are the same. So they just create one constant object to save memory, and make both variables point to that same object.

There is no difference for your program’s functionality. You get an immutable string containing “X”.

However, note that there is no guarantee this will happen. Some compilers perform this optimization for identical strings in one file, but not across files. Others are smarter and give you the same object even across source files in the same project. On some platforms, the string class keeps track of all strings it has created so far, and if you ask for one it already knows, gives you that one to save RAM. On most, if you load a dynamic library (like a framework), it gets its own copy of each string, because the compiler cannot know whether the surrounding application already has that string (the library might be loaded into any arbitrary app).

Mutability is important

This is a neat trick compilers use that works only with immutable objects, like NSString or NSDictionary. It does not work with NSMutableString. Why? Because if the compiler put the same mutable string into a and b, and you called appendString: on a, b would change as well. And of course we wouldn’t want to change our program’s behaviour that way.

For the same reason, NSString may be optimized so that copy is implemented like this:

-(id) copy
{
    return [self retain];
}

That’s right. It gives you the same object, just with the reference count bumped, because you can’t change this string once it has been created. From the outside it looks the same: copy gives you an object that you can safely release once you’re done with it, and it behaves just like a copy. The only hint that this happened is that instead of an NSString with a reference count of 1 owned solely by you, you get one with a reference count of 2 whose ownership you share with another object. But that’s what shared ownership is about, after all.

Of course, this optimization doesn’t work with NSMutableString.

What I take away from this

So if someone walks up to you and shows you code that uses the == operator where it should really be checking for content equality, and argues that “it works, so it is correct”, now you’ll know why it just happens to work:

It’s a fluke, and if Apple decides to switch compilers or finds a better way to optimize performance or memory usage that requires them to no longer perform this optimization, they might just remove it, and this code will break, because it relied on a side effect. And we don’t want our code to break.

The universal catch-all singleton

One bad habit I see time and time again is filling up your application delegate with junk that has no business being in there.

Avoid putting stuff in your app delegate.

What is the application delegate?

By definition, the application delegate is a controller. Most model classes are standard Apple classes. Those that aren’t are slightly smarter collections of these classes. Most view classes just display the model and forward a few IBActions to the controller in a dynamic way, so are inherently reusable as well (even if not always the particular arrangement of their instances).

The controllers, on the other hand, aren’t really reusable. They glue all this stuff together. They’re what makes your application your application, along maybe with a few bindings. So, again by definition, the application delegate is the least reusable part of an application. It’s the part that kicks everything off, creates all the other controllers and has them load the model and views.

Reusability is best!

The whole point behind OOP was to reduce bugs, speed up development, and help structure your code by keeping it grouped in reusable components. The best way to maintain this separation of components and permit re-using parts of it in different projects, is to keep the boundaries between components (e.g. objects) clean and distinct.

Objects have a clear hierarchy. The top creates objects lower down and gives them the information they need to operate correctly for your application. Nobody reaches up the hierarchy, except maybe to notify whoever created them of occurrences, in the form of delegate messages. That way, the more application-specific your code gets, the fewer other objects know about it. The further down you go, the more reusable the code.

Moving operations or instance variables that are shared by several objects in your application into the application delegate, and having other objects reach straight up through NSApplication.sharedApplication.delegate to get at them, runs head-on against this desire, and turns your carefully separated code into an inseparable glob of molten sludge. Suddenly *everybody* includes the most application-specific header your application contains.

Don’t lie to yourself

The application delegate is one of the singletons every application contains. Its name is misleading and fuzzy. If you see it as a place to hold code “relevant to the application as a whole”, there is pretty much nothing that is off-topic for it. It is the universal, catch-all singleton.

So why not be honest with yourself: whenever you add code to the application delegate, and you’re not just reacting to a delegate method from NSApplication and creating a controller to perform the actual action in response, what you are really doing is creating a singleton.

As we all know, singletons have a use, but having many singletons is a code smell. So avoid them if you can, but if you feel you can’t, be honest to yourself and actually make it a separate singleton (or find one whose purpose these operations fit).
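What “actually make it a separate singleton” means can be sketched in plain C, with a hypothetical PhoneBookStore type (lazily created, deliberately not thread-safe here — the point is the narrow, clearly named purpose, not the locking):

```c
#include <stdlib.h>

/* A hypothetical singleton with one clearly named job, instead of a
   grab-bag of state parked on the application delegate. */
typedef struct PhoneBookStore { int numEntries; } PhoneBookStore;

PhoneBookStore *SharedPhoneBookStore(void) {
    static PhoneBookStore *sShared = NULL;   /* created once, lazily */
    if (sShared == NULL)
        sShared = calloc(1, sizeof *sShared);
    return sShared;
}
```

Every caller asks SharedPhoneBookStore() for the shared instance and gets the same object back, and the header that declares it stays small and on-topic.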

Just say no to the application delegate as the universal singleton.

Update: If this hasn’t convinced you, here’s another blogger with a more CoreData-centric discussion of the issue, coming to the same conclusion: Don’t abuse the app delegate.

Mapping Strings to Selectors

Back in the old days of Carbon, when you wanted to handle a button press, you set up a command ID on your button, which was a simple integer, and then implemented a central command-handling function on your window that received the command ID and used a big switch statement to dispatch it to the right action.

In Cocoa, thanks to message sending and target/action, we don’t have this issue anymore. Each button knows the message to send and the object to send it to, and just triggers the action directly. No gigantic switch statement.

However, we still have a similar issue in key-value observing: when you call addObserver:forKeyPath:options:context:, all key-value-observing notifications go through one bottleneck: observeValueForKeyPath:ofObject:change:context:. So, to detect which property was changed, you have to chain several if statements together and check whether the key path is the one you registered for (and check the context parameter so you’re sure this is not just a KVO notification your superclass or subclass requested), and then dispatch it to a method that actually reacts to it.

It would be much nicer if Apple just called a method whose name already contained the key path, wouldn’t it? E.g. if the key path you are observing is passwordField.text, why doesn’t it call observeValueOfPasswordField_TextOfObject:change:context:?

But there is a common Cocoa coding pattern that can help us with this: Mapping strings to selectors. The centerpiece of this method is the NSSelectorFromString function. So imagine you just implemented observeValueForKeyPath:ofObject:change:context: like this:

-(void) observeValueForKeyPath: (NSString*)keyPath ofObject: (id)observedObject change: (NSDictionary*)changeInfo context: (void*)context
{
    NSString *sanitizedKeyPath = [keyPath stringByReplacingOccurrencesOfString: @"." withString: @"_"];
    NSString *selName = [NSString stringWithFormat: @"observeValueOf%@OfObject:change:context:", sanitizedKeyPath];
    SEL      action = NSSelectorFromString(selName);
    if( [self respondsToSelector: action] )
    {
        NSInvocation * inv = [NSInvocation invocationWithMethodSignature: [self methodSignatureForSelector: action]];
        [inv setTarget: self]; // Argument 0
        [inv setSelector: action]; // Argument 1
        [inv setArgument: &observedObject atIndex: 2];
        [inv setArgument: &changeInfo atIndex: 3];
        [inv setArgument: &context atIndex: 4];
        [inv invoke];
    }
    else
        [super observeValueForKeyPath: keyPath ofObject: observedObject change: changeInfo context: context];
}

We build a string that includes the name of the key path, turn it into an actual selector, and then use -performSelector:withObject: or, in more complex cases like this one, NSInvocation to actually call it on ourselves.

For cases that have no clear mapping like this, you can always maintain an NSMutableDictionary where the key is whatever string your input is and the value the selector name for your output, and then use that to translate between the two. When you make whatever call equivalent to addObserver: you have in that case, it would add an entry to the dictionary. That’s probably how NSNotificationCenter does it internally.
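That string-to-handler dictionary idea can be sketched in plain C as a table mapping names to function pointers — the lookup plays the role NSSelectorFromString plays above (all names here are hypothetical):

```c
#include <stddef.h>
#include <string.h>

/* Map a string to a function pointer -- the plain-C analogue of turning
   a string into a SEL and sending it. */
typedef int (*Handler)(void);

static int HandlePasswordChange(void) { return 1; }   /* hypothetical handler */

static const struct { const char *name; Handler handler; } sHandlers[] = {
    { "passwordField_text", HandlePasswordChange },
};

Handler HandlerForName(const char *name) {
    for (size_t i = 0; i < sizeof sHandlers / sizeof sHandlers[0]; ++i)
        if (strcmp(sHandlers[i].name, name) == 0)
            return sHandlers[i].handler;
    return NULL;   /* analogous to respondsToSelector: returning NO */
}
```

Registering a new observer would just mean appending an entry to the table; looking up an unknown name returns NULL, the equivalent of falling through to super.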

Update:
As Peter Hosey pointed out, another good use case for this pattern is -validateMenuItem: where one could turn the menu item’s action into a string and concatenate that with ‘validate’.

WWDC 2013 predictions

Just like last year, I thought I’d note down my thoughts about WWDC, make some predictions, and then after WWDC come back here and compare them to what actually got announced.

  • I think it’s Mac Pro time. As in, not necessarily a new Mac Pro, but rather a successor. My guess is a beefed-up Mac mini (maybe a little larger). Several separate Thunderbolt lanes to make up for the missing room for PCI cards.
  • We will see the next iOS. No idea what will be in it.
  • Rumors around the ‘net say Apple pulled people off MacOS to finish iOS, so while I think we’ll get a MacOS announcement, we might not get a new MacOS right away. If we do, it’ll probably be a simpler update. A few iOS features, a few incremental improvements. Hopefully we’ll finally see a coherent story emerge instead of the weird Launch Pad/Finder dichotomy.
  • I think there will be other Mac updates. All Macs are pretty much overdue for an update these days. But I’m not sure we’ll see updates for all of them at WWDC.
  • I wish for a MacBook Air 11″ Retina. Something fast, portable with room for more RAM. I think it’s difficult – Retina displays of an MBA size are pretty much like an iPad display, but a Mac needs a more powerful GPU than an iPad because it runs several apps at once and can attach large external displays – but I think we’re close enough to get these now. The main issue as I understand it is power consumption. The MBA 11 doesn’t provide much room for a battery to begin with, and more pixels and a suitable GPU need more power as well. But Intel’s new CPUs supposedly use less power, so maybe that provides the savings needed to have power to squander on a Retina display and GPU.

That’s all I have. Feel free to leave your ideas and thoughts in the comments.

Why Cocoa programmers can’t have nice things

Amy Worrall pointed me at a nice post on the technical feasibility of using exceptions for more than programming errors in Cocoa by Hari Karam Singh. Even though he is misled by some Mac documentation into thinking iOS didn’t have zero-cost exceptions and then disproves that documentation by disassembly, he draws lots of valid conclusions.

However, the problem is not one of technology. The problem is one of the sheer size of Apple’s Cocoa codebase, which would have to be updated to survive having an exception thrown through it. Apple would have to add @trys in every location where they call a SEL, after all, since they don’t know which of them may be user-provided and throw.

Since they’re not doing that, a user who decides to use exceptions anyway would have to add @trys to every method that might ever be called by the framework. That means you can’t catch exceptions thrown by that method when you call it, though, because it swallows them itself. So if you want to handle errors from that method, you either split it up into okButtonClickedThrows: and okButtonClicked:, duplicating every method and working in parallel with two error handling schemes, or you give up, like Apple, and just use one non-exception error handling scheme.

I love exceptions, but I don’t think my Cocoa code will be cleaner, or its error handling nicer, if I put a try block at the top of every action and delegate method. NSError is less dangerous: if an object returns nil and hands back an error (and you don’t look at the returned error), the method call simply collapses (a message to nil is a no-op), so nothing much happens. Since I can’t put up an error dialog from drawing code or table delegate callbacks like numberOfSections, there’s not much difference there. The code is actually cleaner, because with NSError and nil returns I can just ignore errors, while with exceptions in an exception-unsafe Cocoa, I must catch here or risk throwing through Cocoa.

C++ also has an advantage over Objective-C when working with exceptions, because it uses “Resource Acquisition Is Initialization” (RAII for short). Locks, allocations, even changes to a global boolean can be implemented as stack objects that use RAII to set themselves up when created and clean up after themselves in their destructor. The language doesn’t even need a ‘finally’ block. OTOH, every method you write in an exception-handling ObjC would need an @finally block, even if it doesn’t care about the errors, just to clean up after itself.

ARC, @autoreleasepool and @synchronized can help a little with clean-up of memory and locks these days, as they’ll get triggered on an exception anyway. But as Cocoa and Apple’s frameworks currently stand, using exceptions effectively doubles your work.

The same applies to existing code. Nobody wants to have to completely rewrite their apps for 10.9 just to adopt a new error handling scheme when their code already has working error handling with NSError. Apple understands that their developers want a certain degree of backward compatibility. That’s the reason why only iOS got the new runtime on 32-bit: There was no code that relied on the old runtime there, it was a new platform. But all existing Mac applications would have been broken if the system had suddenly no longer guaranteed instance variable layouts and method lookup-tables. However, since 64-bit required changes to pointer sizes and data structures anyway, nobody complained when Apple introduced the new runtime for 64 bit on the Mac. They had to re-test and update their applications anyway.

All that said, I would love a new Objective-C framework that uses exceptions and is exception-safe for new projects to be built against. It just doesn’t seem like something Apple can retrofit at a whim.

At best, they can slowly make each framework exception-safe, and then in every spot where there can be an error, instead of returning it, look at an “app supports exception handling”-flag and throw their errors if that is set. That way, existing applications will keep working, while new applications can be written exception-safe. And once the majority of applications have moved to exceptions, Apple can switch to using exceptions themselves (see above — you don’t want 2 versions, exception-safe and unsafe, of every method), and tell the stragglers to please make their code exception-safe.

Setting up Jenkins for Github and Xcode, with Nightlies


Jenkins? What? Why?

When you work alone on several projects that share code, it’s easy to break the build of one project with a change for another without noticing, or to introduce some specific dependency on a quirk of your main work Mac, or to lose data by referencing a file outside the repository instead of copying it in. Since that’s annoying, I decided to set up Jenkins, a continuous integration system, on the Mac mini that serves as my EyeTV DVR, media centre and home server.

It’s not that hard, but some of the details are a bit fiddly and under-documented, so I thought I’d write down how I made it work before I forget (and for when I next have to set it up again). My source code is in a Github repository, and while I was at it, I wanted to set things up so that one of my open source projects gets nightly builds FTPed onto its web site (but only when I’ve actually changed something).

Initial Install

Jenkins is a Java application. Since Java no longer comes pre-installed on Mac OS X, if you’re not using any other Java applications, you should open a Terminal and type in java, which will make Mac OS X notice Java is not yet installed and download it. Also make sure you’ve installed Xcode on your Mac. The version from the app store is fine, but make sure you install the command line tools under Preferences > Downloads on the Components tab, so Jenkins will be able to find git.

Next, you’ll want to create a dedicated user account for Jenkins to run under. The standard installer does that for you, but it only creates a command-line account, which makes it very hard to set up all the certificates. So go to the Users & Groups section of System Preferences and create a new account named “Jenkins”. Make sure you enter “Jenkins” with a capital ‘J’ under “Account name”. Also, right-click the account in the list on the left and choose “Advanced Options…”. Replace the standard home directory “/Users/Jenkins” with “/Users/Shared/Jenkins”, which is what the standard installer will use. Update: I’m getting reports that this is broken in 10.9.2 (or maybe in a new Jenkins installer released around that time), so there I guess you’ll have to set everything up using the command line.

Now that all is ready, go to jenkins-ci.org, and right on the front page you’ll find a Mac OS X direct download link that gives you a nice Mac installer package. Run that. Jenkins will be installed so it automatically launches at system startup, and it will run on your Mac on Port 8080. So make a note to later forward that port through your router to your Mac so it is accessible from the outside (you will need a dynDNS domain-name connected to it, or a static IP, or Github won’t be able to notify you of changes). But not yet! First you have to secure Jenkins with a password.

Open your browser and point it at http://localhost:8080 (your Mac’s Bonjour name is fine as well, as will be your external domain name or IP, when you later set up the port forward). And after a short wait you’ll get to Jenkins’ front page. There’s a breadcrumb bar at the top which pops up a little menu if you mouse over the initial Jenkins breadcrumb:

[Screenshot: Jenkins front page]

Under Manage Jenkins, click Configure Global Security and there, check Enable Security, but do not save yet! If you do, Jenkins will happily lock you out. You haven’t created a user login yet, so you’ll never be able to get back in again without editing the config.xml in the Jenkins user folder and manually deleting the three security-related lines in there.

Next, we’ll have to set up permissions for the new user login, and then actually create it. So first tell Jenkins to use Jenkins’s own user database and Allow users to sign up under Security Realm. Then check Matrix-based security and type whatever user name you want into the little User/group to add: field and click Add. Then make sure that all checkboxes are checked for this user login, all the way to the right, and all are off for “Anonymous”.

Screen Shot 2013 04 06 at 00 20 36

Now that that’s done, you can save. Then click Sign up in the upper right and sign up under the user name you just gave all the permissions to. Yay! We have a valid user! Now go and turn off the Allow users to sign up checkbox again so nobody else makes themselves an account on your server.

Git support

By default, Jenkins only does SVN. But it has a nice big list of plug-ins that you can easily install. Go to the menu, Manage Jenkins > Manage Plugins and go to the Available tab. There’s a boatload of plugins there, but we only care about one for now: the Github Plugin. Find it (there are a few with similar names) and install it. Check the box to restart Jenkins after the installation.

If you’re curious about a plugin, just click its name. It will show a web site with documentation and setup instructions. Installing a plugin means that its checkboxes and text boxes show up in the Configure System section and each job’s Configure section. So let’s go to Configure System.

Take note of the Home directory mentioned at the top. This is where you’ll later be installing your Github certificates so Jenkins can check out code. Also, since you want your Jenkins externally accessible, scroll down to Jenkins Location and enter your external URL or static IP as the Jenkins URL there.

Between those two is a Git category. Click the lone Git installations… button there. If you installed the Xcode command line tools as mentioned, you will not see a red error message here and it will have found git in the default location. Otherwise, set up the search paths to point wherever you have git installed.

Now go to Github Web Hook and check Let Jenkins auto-manage hook URLs and enter your username and password in the fields that show up. This is needed so Jenkins can install a script that notifies it whenever a new commit has been pushed, so it’ll start a build. Click Test Credential to make sure that works.

GithubWebHook

Setting up a job

In Jenkins, everything that is built periodically is represented as a Job. To create a new job, click New Job in the upper left on the Jenkins home page. In the page that follows, choose a name (but be sure not to use spaces, as this name will be used for the folder in which Jenkins will work, and you’ll have much less trouble with a shell-script-friendly name), and select “Build a free-style software project”.

Jenkins New Job Page

Click OK, and you’ll get to that job’s Configure page. Select Git under Source Code Management and enter the URL you see on Github under SSH for your repository twice, once in Github project at the top, and as the Repository URL under Source Code Management:

Creating a new Jenkins Job

And finally, check Build when a change is pushed to Github under Build Triggers:

Screen Shot 2013 04 06 at 13 44 07

Now you’ve set up Jenkins so it will try to check out your code whenever a change happens. Next, we will have to tell it how to actually build it. We do that in the Build section. Click Add build step and choose Execute shell. You’ll get a text field in which you can enter a shell script.

This shell script will be run in the folder into which your repository was checked out. This will be a folder named after your job in the workspace subfolder of your jenkins user’s home directory. So if your Xcode project file is at the root of the repository, you can just call xcodebuild there. If it’s in a subfolder, you can cd SubFolderName and then call xcodebuild.
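A minimal version of such a build step might look like this (a sketch; SubFolderName is a placeholder for wherever your project file lives):

```shell
# Jenkins runs this in the root of the checked-out repository.
# If the Xcode project lives in a subfolder, change into it first:
cd SubFolderName
xcodebuild
```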

Set up a Jenkins build

One problem with Xcode is that it builds into a hard-to-predict folder somewhere inside Library. Jenkins requires all built files to be inside a job’s workspace folder, or it won’t let you archive them. So we need to override that and e.g. build into a build folder inside the checkout. To achieve this, we set the CONFIGURATION_BUILD_DIR environment variable when we call xcodebuild. Note that the example screenshot hard-codes the path, while I now use one of Jenkins’ environment variables:

BUILD_DEST_PATH=${WORKSPACE}/build

The script above, once it is done, grabs the files from the build folder and compresses them into a single archive. This is so we can archive the built file somewhere, for future reference. The actual archiving is done under Post-build Actions where we select Archive the artifacts from the Add post-build action popup. Here again, the path is relative to your job’s workspace subfolder, so if you want to archive something added to the build folder, use a relative path like build/MySweetApp.tgz.
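In case the screenshot is hard to make out, the whole build step could look roughly like this (a sketch, not my exact script; MySweetApp stands in for your product’s name):

```shell
# Build into a predictable folder inside the job's workspace:
BUILD_DEST_PATH="${WORKSPACE}/build"
xcodebuild CONFIGURATION_BUILD_DIR="${BUILD_DEST_PATH}" build

# Compress the built product into a single archive for the artifacts step:
cd "${BUILD_DEST_PATH}"
tar -czf MySweetApp.tgz MySweetApp.app
```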

Note that Jenkins tries to be helpful and displays red warnings that it can’t find the file to archive right now. At this point, that’s OK: there has been no checkout, and we’ve never archived anything. Later, however, this can be very helpful in figuring out what’s gone wrong if the checkout works but building fails.

Done? Then save. You could also click Build Now on the left (or the little clock with the green “run” arrow on it anywhere next to your newly-created job on the home page), but it would fail. Apart from errors in your script, there would likely be two issues, which you can see by hovering the mouse over any failed build and clicking “Console Output” in the menu that shows up:

  1. Github will complain that you don’t have permission/are missing certificates
  2. Xcode will complain you haven’t accepted its license agreement.

Remember when you set up access to Github for the first time and you had to create and install certificates? You’ll have to do that for your Jenkins user, too. Or you could simply copy the hidden .ssh folder in your user directory over into the Jenkins user’s home folder. Note that this isn’t e.g. /Users/Shared/Jenkins/Home like in the standard installation, but actually one folder up, so you want your certificates in /Users/Shared/Jenkins/.ssh/.

Note you’ll have to

sudo cp -R ~/.ssh /Users/Shared/Jenkins/
sudo chown -R jenkins /Users/Shared/Jenkins/.ssh

to make the folder accessible to the Jenkins user. If you created a real GUI user for Jenkins, you can simply run Xcode once and accept the license agreement. Alternately, as the error message from xcodebuild will tell you, you can do

sudo -u jenkins -i
xcodebuild -license
exit

to view the license. While you’re in the license screen, you can skip to the end by typing an uppercase G (you’ve already accepted the license agreement when installing the command line tools from inside Xcode, after all), then type in agree, as directed.

If you now click Build Now on Jenkins’s home page, it should work. The job should show up in the Build Queue, the ball at its left should flash for a while, and the similar ball next to the job in the list of jobs on the home page should be blue. If it is red, click the Job’s name, and in the Build History on the left mouse over it and choose Console Output to see the log and error messages.

If a build was successful, you can view the archived files (“artifacts”) by clicking a job on the home page, and then one of the individual build times listed at the bottom of its page.

Setting up nightly builds

If you want to make nightly builds and not just have them in Jenkins, but actually upload them to an FTP server somewhere, you will need the FTP Publisher Plugin. Install it, then go into Manage Jenkins > Configure System and scroll to the new FTP repository hosts section. You can create a preset for each server you have. Since Jenkins is publicly accessible, I recommend creating a separate user for Jenkins and restricting it to only a nightlies folder on the server, and no CGIs. That way, should Jenkins somehow be hacked, people at least can’t use the password and log in to deface the rest of your server (even though they *can* replace the downloads).

FTP Server settings

Once you’ve added your server, create a new job (e.g. MySweetAppNightly), but this time check Copy existing Job and type in the name of your regular CI job as the template (e.g. MySweetAppCI). Then just add a new action under Post-build Actions by choosing Publish artifacts to FTP from the Add post-build action popup. Select your server from the FTP site popup and enter the relative path of the TGZ archive you set up in Files to upload (in our example, that would be build/MySweetApp.tgz). Leave the Destination field empty, unless you want to upload into a subfolder of the folder you set up in System Configuration. If you are building into a subfolder (like in our example build/MySweetApp.tgz), you may also want to check Flatten files, or it will upload into a subfolder of the same name on the server.

Now, what this would do right now is upload every change to the FTP server. That would strain the server unnecessarily if your application contains large assets. We want a nightly build. How do we do this? We scroll up to Build Triggers, turn off Build when a change is pushed on Github, and check Poll SCM. We could just pick Build periodically, which is almost identical, but the advantage of Poll SCM is that it won’t build and upload if nothing has changed since the last build. (It would also be the right option if you host somewhere other than Github, can’t use the plugin, and don’t want to write your own post-commit hook.)

Click the question mark next to the Schedule text field to see the syntax; it is pretty much like cron. I picked 4 AM at night, every day of the week. That’s a time when I’m usually not in the middle of a series of check-ins, so chances are the build will go through fine.
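For reference, a schedule line for 4 AM every day would look like this (fields are minute, hour, day of month, month, day of week, just like cron):

```
0 4 * * *
```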

Jenkins Build Triggers for Nightlies

If you hardcoded the path in the Build section, be sure to rename the job in the path there, then save and click Build Now to generate and upload your first nightly build.

Getting notified

Of course, all of this would be pretty pointless if we had to check Jenkins after every commit. Luckily, Jenkins can e-mail you. You need an e-mail account somewhere (I recommend creating one dedicated to Jenkins, again for security reasons, but any old Hotmail account would work). Scroll to “E-Mail Notifications” in Configure System and click the Advanced button, check Use SMTP Authentication, then enter the same user name, password and SMTP server you would specify to use this account from Mail.app or another e-mail client. Check Use SSL if your mail provider supports that.

Advanced E-Mail Notifications

Then check Test configuration by sending test e-mail and enter whatever e-mail address you want to send a test e-mail to in Test e-mail recipient and click Test configuration. If you mis-configured something, you’ll get Java throwing up backtraces in red all over your window. Enjoy.

Now, all that’s left is adding an E-mail Notification Post-build action to each job. Happy continuous integration, and a happy new year.

I also installed the Twitter plugin and added that as a Post-build action to notify me whether the build is broken. It is pointed at a protected Twitter account that only I can follow. You have to run a little Java command-line app to get the API access tokens needed for Jenkins to talk to Twitter, and paste those into text fields on Jenkins’ System Configuration, but that’s as complicated as it gets.

Certificates and DeveloperID Builds

If you want to build your Mac executables for distribution outside the Mac app store, you will need to log into your Jenkins user, open Xcode, and go into the Organizer there. Click Refresh in the Provisioning Profiles section (click OK to dismiss any error messages about missing App Store certificates it may put up; those don’t matter if you only want to distribute outside the MAS, or if you work Mac-only and not on iOS).

Next, click your team. If it says there are no private certificates across the top, go to a Mac that has all your certificates already set up for development, and in the Organizer choose Editor > Developer Profile > Export Developer Profile… to export your private keys as a password-protected .developerprofile file. Copy that file over to the Jenkins user and double-click it there, and Xcode will ask for the password and install your certificates.

Now, verify that everything works: Open the project you want to build for Developer ID. Go into the project’s build settings and set the Code Signing setting to your Developer ID Application certificate. Build the project, clicking Always allow if Xcode asks for permission to use your private key to sign the application. If the build fails with an error like “timestamps differ by 205 seconds — check your system clock Command /usr/bin/codesign failed with exit code 1”, you probably dawdled too long in the confirmation dialog. Just build again and it should work.

If codesign fails trying to sign a file that doesn’t exist, you probably have a target that doesn’t produce a file, like a target that runs a shell script to generate a header containing the build number. Since we’re overriding the code signing setting for the entire project, it will try to sign that nonexistent output file. One way to satisfy it is to add a line like touch ${CODESIGNING_FOLDER_PATH} to the end of your script, which will create an empty file for codesign to sign.
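So the script phase of such a target might end like this (a sketch; the fallback file name is only there so the line also works outside Xcode, which sets CODESIGNING_FOLDER_PATH for you during a build):

```shell
# Create an empty file at the target's output path so codesign
# has something to sign. Outside an Xcode build, CODESIGNING_FOLDER_PATH
# is unset, so fall back to a dummy name.
touch "${CODESIGNING_FOLDER_PATH:-codesign_dummy.stamp}"
```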

Now that we know you have code signing set up correctly, we simply modify the call to xcodebuild in the job you want to have signed for Developer ID:

security unlock-keychain -p 's3kr1tp4ssw0rd' ~/Library/Keychains/login.keychain
xcodebuild CONFIGURATION_BUILD_DIR=$BUILD_DEST_PATH  \
  CODE_SIGN_IDENTITY="Developer ID Application: Joe Shmoe" \
  -configuration Release \
  clean build

The first call, to security unlock-keychain, does just that: it gives xcodebuild, running under Jenkins, access to the keychain containing the keys it needs for code signing. Here you need to specify the keychain’s password, which is not a very secure thing to do, but can’t really be avoided in this case. At some point, the server *will* need access to your keys to build.

For a tiny bit of extra peace of mind, you might want to change the password of your keychain to be different from the account’s login password using the Keychain Access application. That way, if someone somehow manages to see this script and the password to the keychain, they still can’t log into your build tester to actually use it.

Alternately, you could run the Jenkins server that is exposed to the outside and manages the job on a different Mac (or at least as a different user?) than the actual build server. Jenkins allows that, so you can e.g. have one Jenkins web interface to build Mac, Windows and Unix software.

The second call is the same as our previous xcodebuild call, with three parameters added:

  1. The first one does in script what we did manually for testing in the GUI: It overrides the “Code Signing:” build setting with the Developer ID certificate (Insert your name here).
  2. The second one makes sure we don’t build with whatever configuration was last set, but instead make a release build, with optimizations and such. You could also specify that for other cases, if you wanted, to make sure e.g. stuff you remove from release builds doesn’t break the debug builds or vice versa.
  3. The third makes sure we clean before we build. This makes sure you don’t get any leftover files from a previous build (e.g. script-generated header files), but is also slower than an incremental build. You would probably also want to do this for nightly builds, but probably not for a CI build that gets triggered a lot, unless your project is very small.

Including the job number

For my nightly builds, I wanted a monotonically increasing version number. Adding this is fairly easy: just pass a few more settings overrides to xcodebuild:

  GCC_PREPROCESSOR_DEFINITIONS="BUILD_NUM=${BUILD_NUMBER} BUILD_MEANS=nightly" \
  INFOPLIST_PREPROCESSOR_DEFINITIONS="BUILD_NUM=${BUILD_NUMBER} BUILD_MEANS=nightly" \

The first line includes the build number that Jenkins uses for this job as a #defined constant accessible to your source files. The second does the same for projects that you have set to preprocess their Info.plist file (e.g. to include the build number in the CFBundleVersion string). You can also define other constants, separated by spaces, like BUILD_MEANS in this example, to e.g. display somewhere that this is a nightly build. You can provide default values for manual builds in a header that you include in those source files that need them, or in a prefix header for your Info.plist:

#ifndef BUILD_NUM
#define BUILD_NUM     0
#endif
#ifndef BUILD_MEANS
#define BUILD_MEANS   manual
#endif
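If you have Xcode preprocess the Info.plist, you can then reference the constant right in the plist; a sketch, assuming the standard CFBundleVersion key:

```
<key>CFBundleVersion</key>
<string>BUILD_NUM</string>
```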

And this should cover everything you typically need to do on your new continuous integration Mac.

Universal Procedure Pointers

When Apple announced they’d be switching from PowerPC to Intel CPUs in 2005, many existing Mac developers were looking at the prospect more calmly than newer arrivals to the platform. After all, Apple had done something similar before, successfully, in the mid-nineties: The switch from the 68000 CPU to the PowerPC CPU.

One of the differences from the 2005 switch, however, was that Apple permitted mixing of PowerPC and 68000 code within the same application. To achieve that, a new “Mixed Mode Manager” was introduced that took care of switching between executing raw PowerPC code and emulating 68000 CPU instructions. The linchpin of this manager was the Universal Procedure Pointer, or UPP for short (sometimes also called a Routine Descriptor).

Universal Procedure Pointers

A UPP was a simple data structure that described the calling conventions and location of a PowerPC function in RAM, and started with a 68000 instruction. This data structure could be handed to any system function where it expected a callback, and could be executed by 68000 code just like a function pointer.

The instruction at the start of the UPP simply contained a jump (think function call, or goto) to a function that recorded the address of the UPP and stopped/started the 68000 emulator.

This meant that Apple only had to port the very foundations of the operating system to PowerPC for the initial roll-out. Applications like the Finder, or window/control definition functions (WDEFs and CDEFs, little code modules that took care of drawing the frame around a window or custom views) could remain 68000 code, and could be ported selectively as needed.

This also meant that plug-ins written for a 68000 application could be loaded and launched by a PowerPC application. Instead of calling the plug-in’s main function directly (which would crash if the plug-in contained 68000 instructions, not PowerPC instructions), the application simply called the CallUniversalProc function.

CallUniversalProc would look at the start of the given function pointer. PowerPC plug-ins effectively contained a UPP at the start, so from the address that UPP jumped to, the Mixed Mode Manager could see that this was already PowerPC code and just jump over the UPP to where the PowerPC code lay. The 68000 emulator only got loaded and run if the function didn’t begin with a UPP.

Also, a 68000 application running in emulation on a PowerPC Mac was able to load a PowerPC plugin. It would simply try to execute the UPP, whose first bytes were the jump instruction telling the Mixed Mode Manager to switch back to PowerPC and run it directly.

Fat binaries – universal apps before universal apps

It was trivial to create an application where the same file ran both on old and new Macs: 68000 executables contained their code in ‘CODE’ resources in their resource fork. PowerPC applications had their code in the data fork of the application file, plus a ‘cfrg’ (“code fragment”) resource with some information about the code (e.g. an offset, so you could have other data in the data fork besides code, which games on 68000 especially liked to do). So a 68000 Mac would simply ignore the data fork and ‘cfrg’ resource, while a PowerMac would look for them, and only start the emulator and run the 68000 code if it failed to find any.

This meant that, in those days, compilers simply built a 68000 and a PowerPC version of the application, then copied the ‘CODE’ resources from the 68000 application into the PowerPC application. Presto! Fat binary for both architectures!

But that wasn’t all: It was also possible to create UPPs (and thus plug-ins) that were “fat”: They contained both PowerPC and 68000 code. Depending on what architecture you were running under, the Mixed Mode Manager would simply jump to the right offset in your plug-in resource, which contained both versions of your code.

Of course, all this mucking about with UPPs meant that you had to allocate/free memory for a UPP for each function you wanted to pass to a system API. And you had to keep that memory around as long as that system call needed it.

For plug-ins, this involved some additional management, as often plug-ins would be dynamically loaded and unloaded during the life of an application. For functions in your application, you usually just stashed the result of NewRoutineDescriptor in a global variable and never bothered calling DisposeRoutineDescriptor.

Why no UPPs for Intel?

So why didn’t Apple choose to do UPPs again for the Intel switch? Well, apart from political reasons (back then, many application vendors, not just Apple, dragged their feet porting their Mac applications to PowerPC, meaning Macs spent most of their cycles emulating old code instead of overtaking the competition), PowerPC and Intel differed in the way they stored numbers in memory.

The PowerPC CPU actually supported running both in big endian like the 68000 and little endian like Intel CPUs. This came in handy when switching from 68000, because the PowerPC CPU was simply told to run big endian, and both PowerPC and 68000 code now stored their data the same way.

But of course the Intel CPU didn’t have that switch. And since an emulator only knows about raw bytes, the PowerPC emulator (“Rosetta”) in Intel Macs could not transparently convert the stored bytes. So it was decided not to allow mixing of PowerPC and Intel code at all. There would only be a tiny bit of translation at the point where a PowerPC application called into the system.

If an application had plug-ins that might still be written in PowerPC code, it could not load them. You had to run a PowerPC version of the application to run your PowerPC plug-ins in, or the Intel version to run your Intel plug-ins (Of course, there were universal binaries that packaged those two versions up in the same file).

The QuickTime media playback library found a nice workaround for this during another switch, though, from 32-bit to 64-bit: it simply launches a separate, hidden 32-bit background process. That process can load any old legacy plug-ins; QuickTime running in a 64-bit application pipes the data to be en-/decoded over to that process, and the process sends it back when it’s done. This is not optimal, but can be surprisingly fast, because it uses Unix shared memory and Mach messages.