iOS


Auto Layout: How to do percentage-based layouts

I recently had to implement a two-directional slider (i.e. a box with an indicator that can go anywhere in it). I wanted to do it using modern auto layout, and I needed it to resize properly on rotation without me having to change internal variables.

That meant that the position of the slider knob would have to be specified as a percentage (well, fraction) in the multiplier of the constraints, so that whatever size the slider UIView had, it would adapt.

My first attempt was to simply specify the position as a multiple of the width. So [self.indicatorView.centerXAnchor constraintEqualToAnchor: self.widthAnchor multiplier: 0.0].active = YES would be left, and multiplier: 1.0 would be right (and analogously for the Y direction).

That worked fine, but had the problem that the indicator could end up “tucked under” the left or right edges. I tried using UIView’s layoutMargins, but that didn’t work either. In the end, I would have had to manually add the fraction corresponding to my indicator’s width at each end to keep that from happening. Might as well use pixels, then.

[Figure: the autolayout margin guides]

Then I remembered I could just add additional UILayoutGuides (macOS has NSLayoutGuide) to define the margins I wanted to keep clear, then define another layout guide relative to those for the actual draggable area, against which I could constrain my indicator.

So first I built 4 guides that were pinned to the edges, had a width (resp. height) of 0.5 × indicatorView.widthAnchor (resp. heightAnchor) and a height/position (resp. width/position) same as the slider view.
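
In code, that setup might look something like this (a sketch with names I made up; it assumes a UIView subclass whose indicatorView is already a subview):

UILayoutGuide *leftMargin = [UILayoutGuide new];
UILayoutGuide *rightMargin = [UILayoutGuide new];
UILayoutGuide *topMargin = [UILayoutGuide new];
UILayoutGuide *bottomMargin = [UILayoutGuide new];
for( UILayoutGuide *guide in @[leftMargin, rightMargin, topMargin, bottomMargin] )
    [self addLayoutGuide: guide];

[NSLayoutConstraint activateConstraints: @[
    // Each horizontal margin is half an indicator wide, pinned to its edge,
    // and covers the full height of the slider view:
    [leftMargin.leadingAnchor constraintEqualToAnchor: self.leadingAnchor],
    [leftMargin.widthAnchor constraintEqualToAnchor: self.indicatorView.widthAnchor multiplier: 0.5],
    [leftMargin.topAnchor constraintEqualToAnchor: self.topAnchor],
    [leftMargin.bottomAnchor constraintEqualToAnchor: self.bottomAnchor],
    [rightMargin.trailingAnchor constraintEqualToAnchor: self.trailingAnchor],
    [rightMargin.widthAnchor constraintEqualToAnchor: self.indicatorView.widthAnchor multiplier: 0.5],
    [rightMargin.topAnchor constraintEqualToAnchor: self.topAnchor],
    [rightMargin.bottomAnchor constraintEqualToAnchor: self.bottomAnchor],
    // And the vertical margins analogously:
    [topMargin.topAnchor constraintEqualToAnchor: self.topAnchor],
    [topMargin.heightAnchor constraintEqualToAnchor: self.indicatorView.heightAnchor multiplier: 0.5],
    [topMargin.leadingAnchor constraintEqualToAnchor: self.leadingAnchor],
    [topMargin.trailingAnchor constraintEqualToAnchor: self.trailingAnchor],
    [bottomMargin.bottomAnchor constraintEqualToAnchor: self.bottomAnchor],
    [bottomMargin.heightAnchor constraintEqualToAnchor: self.indicatorView.heightAnchor multiplier: 0.5],
    [bottomMargin.leadingAnchor constraintEqualToAnchor: self.leadingAnchor],
    [bottomMargin.trailingAnchor constraintEqualToAnchor: self.trailingAnchor],
]];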

Now we had the margins. Then I added a 5th guide that covered the draggable area inside those guides. Then took the old constraints and made them relative to this guide instead of the entire view.

That didn’t work. The height starts at 0, so if used as a position, it would always end up in the upper left. And if I added a constant the size of the margins, I’d have something that wouldn’t update when the view resized again. Might as well use pixels, then.

[Figure: drag area and indicator position layout guide (in blue)]

Then it struck me: Why not just add another guide? The guide is pinned to the upper left of the draggable area, and its width/height are percentages of the draggable area’s height. I can now set the multiplier on the width/height constraints to my slider percentages, and the lower right corner of this 6th “indicator position” guide would be exactly where I want the indicator to be.

So I just change this guide’s multipliers when the indicator moves, and bind the indicator view’s center to the bottom and right anchors of the indicator position guide, and it all works!
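
Putting the working version together, the whole thing might look roughly like this (continuing the made-up names from the sketch above; horizontalFraction and verticalFraction are hypothetical properties holding the knob’s current position as fractions):

UILayoutGuide *dragArea = [UILayoutGuide new];
UILayoutGuide *indicatorPosition = [UILayoutGuide new];
[self addLayoutGuide: dragArea];
[self addLayoutGuide: indicatorPosition];

[NSLayoutConstraint activateConstraints: @[
    // The draggable area is whatever lies between the four margin guides:
    [dragArea.leadingAnchor constraintEqualToAnchor: leftMargin.trailingAnchor],
    [dragArea.trailingAnchor constraintEqualToAnchor: rightMargin.leadingAnchor],
    [dragArea.topAnchor constraintEqualToAnchor: topMargin.bottomAnchor],
    [dragArea.bottomAnchor constraintEqualToAnchor: bottomMargin.topAnchor],
    // The indicator position guide hangs off the drag area's upper left:
    [indicatorPosition.leadingAnchor constraintEqualToAnchor: dragArea.leadingAnchor],
    [indicatorPosition.topAnchor constraintEqualToAnchor: dragArea.topAnchor],
]];

// These two are the constraints that get recreated whenever the knob moves:
self.horizontalConstraint = [indicatorPosition.widthAnchor constraintEqualToAnchor: dragArea.widthAnchor multiplier: self.horizontalFraction];
self.verticalConstraint = [indicatorPosition.heightAnchor constraintEqualToAnchor: dragArea.heightAnchor multiplier: self.verticalFraction];
self.horizontalConstraint.active = YES;
self.verticalConstraint.active = YES;

// The knob's center sits at the guide's lower right corner:
[self.indicatorView.centerXAnchor constraintEqualToAnchor: indicatorPosition.trailingAnchor].active = YES;
[self.indicatorView.centerYAnchor constraintEqualToAnchor: indicatorPosition.bottomAnchor].active = YES;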

Note

You may note that I keep talking about changing the multiplier on constraints. Yeah, that’s not really possible, the only thing on a constraint that can change is the constant (well, and the identifier, but that would ruin the joke).

So yeah, wherever you read that, what I do is remove and recreate the constraint. Sadly, constraints do not have a -removeFromSuperview method, so what I really have to do is walk from a constraint’s firstItem and secondItem property up to their common ancestor and tell it to remove the constraint (if they are views … if they are guides, that means they’re constraints on self or one of its superviews).
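
If you can require iOS 8 / OS X 10.10, the active property does that walk for you: deactivating a constraint removes it from whichever view it was installed on. So “changing the multiplier” boils down to something like this sketch (again with my invented property names):

-(void) setHorizontalFraction: (CGFloat)fraction
{
    _horizontalFraction = fraction;
    self.horizontalConstraint.active = NO;    // Removes it from the common ancestor for us.
    self.horizontalConstraint = [self.indicatorPositionGuide.widthAnchor constraintEqualToAnchor: self.dragAreaGuide.widthAnchor multiplier: fraction];
    self.horizontalConstraint.active = YES;
    [self setNeedsLayout];
}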

Microsoft supports UIKit


This week’s Build conference held a big surprise: Microsoft announced that they’ve built a UIKit compatibility layer for their various flavours of Windows.

Now I’m mainly a Mac developer and only hear of Windows things from friends and colleagues at the moment (the last time I did Windows work was around Windows XP), but my impression so far was that MS was frantically searching for a new API.

I don’t remember all the occurrences, but I remember them announcing Silverlight, and .NET with WPF, and Windows RT that only supported the new APIs, and all sorts of other things, only to then cancel them again.

So my impression as an outsider is that new APIs weren’t trustworthy and MS would always fall back to supporting their old API main-line that they carry around for compatibility reasons anyway.

Announcing UIKit and Android support actually makes a lot of sense in that context:

Although it appears to acknowledge that Windows Phone really didn’t take off, it does solve the catch-22 that MS found themselves in: Lack of apps. In an ideal case, they’ll now get all iOS apps Apple sells, plus the ones Apple rejected for silly reasons, plus those Android apps that iOS users long for.

If this gambit pays off, MS could leap-frog Apple *and* Android.

It also increases trust among developers who are sticking to ancient API: iOS and Android are the only modern APIs that Microsoft could implement that developers would confidently develop against after all these false starts, because even if MS dropped support for them, they’d still have the entire iOS/Android ecosystem to deploy against. So coding against UIKit for Windows Phone is a reasonably safe investment.

Swift

Of course, the elephant in the room here is Apple’s recent move to Swift. Now, given that Apple’s frameworks still all seem to be Objective-C internally (even WatchKit), I don’t think MS have missed the train. They might even pick up some Swift critics that are jumping Apple’s ship by supporting Objective-C.

But Swift damages the long-term beauty of MS’s “just call native Windows API from Objective-C” story. They will have to bridge their API to Swift (like Apple does with some of their C-based API right now), instead of getting people to use more and more classic Windows API in their Cocoa apps until the code won’t run on iOS anymore.

Still, that’s a small aesthetic niggle. MS already have a code-generator back-end that they can plug any parser onto, and Swift doesn’t appear to be a particularly difficult language to parse. In any event, parsers are easier than good code generation. For MS to create a Swift compiler is a solved problem, and I’d be surprised if they weren’t already working on it.

Of course, if MS had known about Swift when they started their UIKit for Windows, would they still have written it in Objective-C? Or would they have just written it in Swift with a bridging header?

So given the situation MS have managed to get themselves into, this sounds like it might be a viable way to survive and, maybe, even come back. Still, it is an acknowledgement of how far MS has fallen that they need to implement a competitor’s API on their platform.

How Drawing on iOS Works

Someone on Stack Overflow recently asked about the various drawing APIs on iOS, and what the difference between using CALayers directly or using them indirectly through UIViews is, and how CoreGraphics (aka Quartz) fits into the equation. Here is the answer I gave:

The difference is that UIView and CALayer essentially deal in fixed images. These images are uploaded to the graphics card (if you know OpenGL, think of an image as a texture, and a UIView/CALayer as a polygon showing such a texture). Once an image is on the GPU, it can be drawn very quickly, and even several times, and (with a slight performance penalty) even with varying levels of alpha transparency on top of other images.

CoreGraphics (or Quartz) is an API for generating images. It takes a pixel buffer (again, think OpenGL texture) and changes individual pixels inside it. This all happens in RAM and on the CPU, and only once Quartz is done, does the image get “flushed” back to the GPU. This round-trip of getting an image from the GPU, changing it, then uploading the whole image (or at least a comparatively large chunk of it) back to the GPU is rather slow. Also, the actual drawing that Quartz does, while really fast for what you are doing, is way slower than what the GPU does.

That’s obvious, considering the GPU is mostly moving around unchanged pixels in big chunks. Quartz does random-access of pixels and shares the CPU with networking, audio etc. Also, if you have several elements that you draw using Quartz at the same time, you have to re-draw all of them when one changes, then upload the whole chunk, while if you change one image and then let UIViews or CALayers paste it onto your other images, you can get away with uploading much smaller amounts of data to the GPU.

When you don’t implement -drawRect:, most views can just be optimized away. They don’t contain any pixels, so can’t draw anything. Other views, like UIImageView, only draw a UIImage (which, again, is essentially a reference to a texture, which has probably already been loaded onto the GPU). So if you draw the same UIImage 5 times using a UIImageView, it is only uploaded to the GPU once, and then drawn to the display in 5 different locations, saving us time and CPU.

When you implement -drawRect:, this causes a new image to be created. You then draw into that on the CPU using Quartz. If you draw a UIImage in your drawRect, it likely downloads the image from the GPU, copies it into the image you’re drawing to, and once you’re done, uploads this second copy of the image back to the graphics card. So you’re using twice the GPU memory on the device.

So the fastest way to draw is usually to keep static content separated from changing content (in separate UIViews/UIView subclasses/CALayers). Load static content as a UIImage and draw it using a UIImageView, and put content generated dynamically at runtime in a drawRect:. If you have content that gets drawn repeatedly, but by itself doesn’t change (e.g. 3 icons that get shown in the same slot to indicate some status), use UIImageView as well.
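
To make that concrete, here is a sketch of such a split for a hypothetical gauge view (all names invented): the dial face is a static image in a UIImageView, and only the small needle view pays the -drawRect: price when the value changes.

@interface NeedleView : UIView
@property (nonatomic) CGFloat angle;
@end

@implementation NeedleView

-(void) setAngle: (CGFloat)angle
{
    _angle = angle;
    [self setNeedsDisplay];    // Only this small view is re-rendered on the CPU.
}

-(void) drawRect: (CGRect)dirtyRect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect box = self.bounds;
    CGContextTranslateCTM( context, CGRectGetMidX( box ), CGRectGetMidY( box ) );
    CGContextRotateCTM( context, self.angle );
    CGContextSetLineWidth( context, 2.0 );
    CGContextSetStrokeColorWithColor( context, [UIColor blackColor].CGColor );
    CGContextMoveToPoint( context, 0, 0 );
    CGContextAddLineToPoint( context, 0, -CGRectGetHeight( box ) / 2.0 );
    CGContextStrokePath( context );
}

@end

// The dial face never runs drawing code of its own; its image sits on the
// GPU and just gets pasted at its position each frame:
UIImageView *dialFace = [[UIImageView alloc] initWithImage: [UIImage imageNamed: @"dial"]];
NeedleView *needle = [[NeedleView alloc] initWithFrame: dialFace.bounds];
needle.backgroundColor = [UIColor clearColor];
needle.opaque = NO;    // Transparent views cost extra blending, so keep them small.
[dialFace addSubview: needle];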

One caveat: There is such a thing as having too many UIViews. Particularly transparent areas take a bigger toll on the GPU to draw, because they need to be mixed with other pixels behind them when displayed. This is why you can mark a UIView as “opaque”, to indicate to the GPU that it can just obliterate everything behind that image.

If you have content that is generated dynamically at runtime but stays the same for the duration of the application’s lifetime (e.g. a label containing the user name) it may actually make sense to just draw the whole thing once using Quartz, with the text, the button border etc., as part of the background. But that’s usually an optimization that’s not needed unless the Instruments app tells you differently.

Cocoa and the Builder Pattern

There’s been a nice discussion about the Builder pattern on Twitter today. The Builder pattern is a nice tool to have, particularly because it addresses a few common problems.

What Builder Pattern?

In short, the Builder pattern is a pattern where you have one object that you configure that then creates another object based on that configuration. The nice thing here is that you can first build your object step by step, like you’d e.g. do with NSMutableString, but then the actual construction of the object happens in one go. Very handy for immutable objects.

Usually, a setter for a Builder object returns self, like retain or autorelease do. That way, you can create something in Java or C++ that almost looks like Objective-C:

Image* theImage = (new Image::Builder())->SetWidth(100)->SetHeight(80)->SetDepth(8)->Build();

Where the Build() method releases the builder and returns the actual, immutable Image object.

Extending init methods

When you add a parameter to an initializer in Objective-C, it is annoying. You usually add the parameter to the initializer, then create a compatibility version with the old method’s name that calls the newer version with a default value for the extra parameter.

C++ solved that problem by allowing you to specify default values for parameters (Java instead makes you write an overload per variant), but that doesn’t maintain binary stability either. If you add a parameter, you still have to recompile, but at least you don’t need to change your code.

I guess one fix would be if ObjC supported default arguments to a parameter that would simply result in the creation of a second version of this initializer with the label and parameter removed:

-(id) initWithBanana: (NSBanana*)theBanana curvature: (CGFloat)curvature = 5
{
    // magic happens here
}

Would be the same as writing:

-(id) initWithBanana: (NSBanana*)theBanana curvature: (CGFloat)curvature
{
    // magic happens here
}


-(id) initWithBanana: (NSBanana*)theBanana
{
    return [self initWithBanana: theBanana curvature: 5];
}

Of course, you’d still need at least one parameter, because ObjC has no way of knowing what part of the message is the name, and what is the label for the second (for init there could be special code, I guess, but what for a -exfoliateCow:withSpeed: method?). And defaulting to -initWithBanana if the first parameter has a default is obviously not always desirable either. It would solve the annoyance of telescoping constructors, at the least.

The Builder pattern doesn’t have this problem. Each parameter has a setter that you use to set it. A new builder could have defaults for all parameters when it is created. Then you change the ones you want to customize, and call -build on it to get the new object. If a new setter is added, that’s fine. You don’t call it, you get the default. The maintainers only add the one setter, no compatibility method needed.
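
For illustration, a minimal sketch of what such a builder could look like in Objective-C (ULIImage and ULIImageBuilder are invented names):

@class ULIImage;

@interface ULIImageBuilder : NSObject

-(instancetype) setWidth: (CGFloat)width;    // Each setter returns self so calls chain.
-(instancetype) setHeight: (CGFloat)height;
-(instancetype) setDepth: (NSInteger)depth;  // New setters can be added without breaking callers.
-(ULIImage *) build;                         // Hands out the immutable object.

@end

// Usage reads almost like the Java/C++ example above:
ULIImage *theImage = [[[[[ULIImageBuilder new] setWidth: 100] setHeight: 80] setDepth: 8] build];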

Thread safety and immutable objects

The easiest way to get thread safety is to prohibit data from changing. If data is immutable, there is nothing to be synchronized between threads, and no need for one thread to wait for the other. However, immutable objects are also annoying, as they need to be fully specified in their init method.

A case where this is a problem in Cocoa is NSImage. NSImage is an immutable object by convention, but not actually. It is an object that has its own builder built in. You are expected to know that, for an NSImage to be thread safe, you must create it, set its attributes, draw something in it, and then stop messing with it, treating it as an immutable, read-only object from then on.

The problem is, nobody enforces it. NSImage is a perfectly mutable object, with setters and getters. There is no exception thrown when you violate this verbal contract. Of course Apple could have added a “makeImmutable” method to NSImage that causes those exceptions to happen when you try to edit an instance. But then they’d have to add code to each setter that errors out (or at the least use some aspect-oriented-programming mechanism to inject code before every setter that performs this check automatically).

The Builder pattern would solve that: They can have a huge, private constructor on NSImage that changes with every release to add new parameters and initialize that immutable object, while the Builder would present a stable and convenient API to all clients. There would not be any setters on NSImage.

But it is ugly…

Admittedly, it feels a bit inelegant to build an object that builds an object. The way NSImage works is so much nicer. But Mike Lee actually offers a neat approach that works almost as well:

Just pass in a list of properties. This could be a dictionary of properties, or even just a variadic argument list like -dictionaryWithObjectsAndKeys: takes it. You’d define a constant for each possible property (that way if you mis-type the parameter name the compiler tells you, which you don’t get from a raw string). Internally, this constant could even hold the actual name of the property, even if it is never exposed as a method in the public header. So, all your constructor would do is call [self setValue: properties[key] forKey: key] in a loop, once for every element.

You get the same effect as labeled parameters (if you put the keys first, even more so). You also get the same effect as optional parameters. The binary ABI never changes, so that’s good, too. The only downside is you need to pass every parameter as an object, and you lose compile-time type checks. OTOH you gain compile-time errors when you try to change the object after creating it (because it declares no setters).
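
Here is a sketch of how that could look (class and key names invented; each constant’s value matches a private property name, so the key-value coding loop finds the setters):

NSString * const ULIImageWidthKey = @"width";
NSString * const ULIImageHeightKey = @"height";

@implementation ULIImage

-(instancetype) initWithProperties: (NSDictionary *)properties
{
    if( (self = [super init]) )
    {
        for( NSString *key in properties )
            [self setValue: properties[key] forKey: key];    // KVC finds the private setter or ivar.
    }
    return self;
}

@end

// Every parameter is optional, and adding a key later never changes the ABI:
ULIImage *image = [[ULIImage alloc] initWithProperties: @{ ULIImageWidthKey: @100, ULIImageHeightKey: @80 }];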

Is it worth all that work?

Admittedly, I haven’t had to add parameters to the init method of a public class that often. Nonetheless, I think Mike’s approach and the Builder pattern both are useful things to keep in mind if you ever come up with a class that can be created in numerous configurations (and is likely to gain new properties in the future) but should then be immutable. Class clusters and plug-in classes seem like a typical place where you might need this.

Are your rectangles blurry, pale and have rounded corners?

One common problem with drawing code in Cocoa (iOS and Mac OS X) is that people have trouble getting crisp, sharp lines. Often this problem ends up as a question like “How do I get a 1-pixel line from NSBezierPath” or “Why are my UIBezierPath lines fuzzy and transparent” or “Why are there little black dots at the corners of my NSRect”.

The problem here is that coordinates in Quartz are not pixels. They are actually “virtual” coordinates that form a grid. At 1x resolution (i.e. non-Retina), these coordinates, using a unit commonly referred to as “points” to distinguish them from actual pixels on a screen (or on a printer!), lie at the intersections between pixels. This is fine when filling a rectangle, because every pixel that lies inside the coordinates gets filled:

[Figure: filled rectangle between pixels]

But lines are technically (mathematically!) invisible. To draw them, Quartz has to actually draw a rectangle with the given line width. This rectangle is centered over the coordinates:

[Figure: coordinates between pixels]

So when you ask Quartz to stroke a rectangle with integral coordinates, it has the problem that it can only draw whole pixels. But here you see that we have half pixels. So what it does is it averages the color. For a 50% black (the line color) and 50% white (the background) line, it simply draws each pixel in 50% grey. For the corner pixels, which are 1/4th black and 3/4ths black, you get lighter/darker shades accordingly:

[Figure: line drawing between pixels]

This is where your washed-out drawings, half-transparent and too-wide lines come from. The fix is now obvious: Don’t draw between pixels, and you achieve that by moving your points by half a pixel, so your coordinate is centered over the desired pixel:

[Figure: coordinates on pixels]

Now of course just offsetting may not be what you wanted. Because if you compare the filled variant to the stroked one, the stroke is one pixel larger towards the lower right. If you’re e.g. clipping to the rectangle, this will cut off the lower right:

[Figure: coordinates on pixels, cut off at the lower right]

Since people usually expect the rectangle to stroke inside the specified rectangle, what you usually do is that you offset by 0.5 towards the center, so the lower right effectively moves up one pixel. Alternately, many drawing apps offset by 0.5 away from the center, to avoid overlap between the border and the fill (which can look odd when you’re drawing with transparency).
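
In Quartz terms, the usual fix is a one-liner (a sketch; box stands in for the integral rectangle you want stroked):

CGContextRef context = UIGraphicsGetCurrentContext();
CGRect insetBox = CGRectInset( box, 0.5, 0.5 );    // Move all 4 edges half a point toward the center.
CGContextSetLineWidth( context, 1.0 );
CGContextStrokeRect( context, insetBox );          // Now each 1-point edge covers whole pixels.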

Note that this only holds true for 1x screens. 2x Retina screens exhibit this problem differently, because each of the pixels below is actually drawn by 4 Retina pixels, which means they can actually draw the half-pixels needed for a 1 point wide line:

[Figure: coordinates between pixels, Retina]

However, you still have this problem if you want to draw a line that is even thinner (e.g. 0.5 points or 1 device pixel). Also, since Apple may in the future introduce other Retina screens where e.g. every pixel could be made up of 9 Retina pixels (3x), you should really not rely on fixed numbers. Instead, there are now API calls to convert rectangles to “backing aligned”, which do this for you, no matter whether you’re running 1x, 2x, or a fictitious 3x. Otherwise, you may be moving things off pixels that would have displayed just fine:

[Figure: coordinates on and between pixels, future Retina]
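
On the Mac, the call in question is NSView’s backingAlignedRect:options:, which rounds a rect to the current backing store’s pixel grid no matter the scale factor. A sketch from inside a view’s drawing code, again with box standing in for your rectangle:

NSRect alignedBox = [self backingAlignedRect: box options: NSAlignAllEdgesNearest];
NSFrameRect( alignedBox );    // Or stroke/fill it however you like.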

And that’s pretty much all there is to sharp drawing with Quartz.

Cocoa: String comparisons and the optimizer

A while ago, a friend came to me with this bit of code:

NSString *a = @"X";
NSString *b = @"X";
if( a == b )
{
    NSLog(@"Same!");
}

“How come it works with the == operator? Didn’t you have to call isEqualToString: in the old days?”

Before we answer his question, let’s go into what he implicitly already knew:

Why wouldn’t == work on objects?

By default, C compares two pointers by simply comparing the addresses. That is logical, fast, and useful. However, it is also a little annoying with strings, arrays and other collections, because you may have two collections that still contain identical objects.

If you have the phone books from 2013 and 2014, do you just want to compare the numbers 2013 and 2014 and be told: “No that’s not the same phone book”, or are you actually interested in whether their contents are different? If nobody’s phone book entry changed in a particular city, wouldn’t you want to know that and save yourself the trip to the phone company to pick up a new phone book?

Since all Objective-C objects are pointers, the only way to do more than compare the addresses needs some special syntax. So NSString offers the isEqualToString: method, which, if the pointers do not match, goes on to check their contents. It compares each character to the same position in the second string to find out whether even though they’re not the same slip of paper, they at least have the same writing on it.
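
In a nutshell (the first string here is built at runtime, so the compiler cannot coalesce it with the constant):

NSString *a = [NSString stringWithFormat: @"%c", 'X'];    // Created at runtime.
NSString *b = @"X";                                       // Baked into the binary.
BOOL samePointer = (a == b);                  // NO -- different slips of paper.
BOOL sameContents = [a isEqualToString: b];   // YES -- same writing on them.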

So why does the code above think they’re the same?

So after all that, why does the code above think they are the same object? Doesn’t a point to the @"X" in the first line, and b to the @"X" in the second line?

That is conceptually true, and it is what a naïve compiler would do. However, most compilers these days are smart. Compilers know that a string constant can never change. And they see that the contents of both string objects pointed to by a and b are the same. So they just create one constant object to save memory, and make both variables point to the same object.

There is no difference for your program’s functionality. You get an immutable string containing “X”.

However, note that there is no guarantee that this will happen. Some compilers perform this optimization for identical strings in one file, but not across files. Others are smarter, and give you the same object even across source files in the same project. On some platforms, the string class keeps track of all strings it has created so far, and if you ask for one it has already created, gives you that one, to save RAM. In most, if you load a dynamic library (like a framework), it gets its own copy of each string, because the compiler can not know whether the surrounding application actually already has that string (it might be loaded into any arbitrary app).

Mutability is important

This is a neat trick compilers use that works only with immutable objects, like NSString or NSDictionary. It does not work with NSMutableString. Why? Because if it put the same mutable string into a and b, and you call appendString: on a, b would change as well. And of course we wouldn’t want to change our program’s behaviour that way.

For the same reason, NSString may be optimized so that copy is implemented like this:

-(id) copy
{
    return [self retain];
}

That’s right. It gives you the same object, just with the reference count bumped, because you can’t change this string once it has been created. From the outside it looks the same. Copy gives you the object with its retain count bumped, so you can release it safely once you’re done with it. It behaves just like a copy. The only hint you may have that this happened is that instead of an NSString with a reference count of 1, owned solely by you, you get one with a reference count of 2 whose ownership you share with another object. But that’s what shared ownership is about, after all.

Of course, this optimization doesn’t work with NSMutableString.

What I take away from this

So if someone walks up to you and shows you code that uses the == operator where it should really be checking for content equality, and argues that “it works, so it is correct”, now you’ll know why it just happens to work:

It’s a fluke, and if Apple decides to switch compilers or finds a better way to optimize performance or memory usage that requires them to no longer perform this optimization, they might just remove it, and this code will break, because it relied on a side effect. And we don’t want our code to break.

The universal catch-all singleton

One bad habit I see time and time again is filling up your application delegate with junk that has no business being in there.

Avoid putting stuff in your app delegate.

What is the application delegate?

By definition, the application delegate is a controller. Most model classes are standard Apple classes. Those that aren’t are slightly smarter collections of these classes. Most view classes just display the model and forward a few IBActions to the controller in a dynamic way, so are inherently reusable as well (even if not always the particular arrangement of their instances).

The controllers, on the other hand, aren’t really reusable. They glue all this stuff together. They’re what makes your application your application, along maybe with a few bindings. So, again by definition, the application delegate is the least reusable part of an application. It’s the part that kicks everything off, creates all the other controllers and has them load the model and views.

Reusability is best!

The whole point behind OOP was to reduce bugs, speed up development, and help structure your code by keeping it grouped in reusable components. The best way to maintain this separation of components and permit re-using parts of it in different projects, is to keep the boundaries between components (e.g. objects) clean and distinct.

Objects have a clear hierarchy. The top creates objects lower down and gives them the information they need to operate correctly for your application. Nobody reaches up the hierarchy, except maybe to notify whoever created them of occurrences in the form of delegate messages. That way, the more application-specific your code gets, the fewer other objects know about it. The further down you get, the more reusable the code becomes.

Moving operations or instance variables that are shared by several objects in your application into the application delegate, and having other objects reach straight up through NSApplication.sharedApplication.delegate to get at it, goes head-on against this desire, and turns your carefully separated code into an inseparable glob of molten sludge. Suddenly *everybody* includes the most application-specific header your application contains.

Don’t lie to yourself

The application delegate is one of the singletons every application contains. Its name is misleading and fuzzy. If you see it as a place to hold code “relevant to the application as a whole”, there is pretty much nothing that is off-topic for it. It is the universal, catch-all singleton.

So why not be honest to yourself: Whenever you add code to the application delegate, and you’re not just doing it to react to a delegate method from NSApplication and create a controller to perform the actual action in response, what you are really doing is creating a singleton.

As we all know, singletons have a use, but having many singletons is a code smell. So avoid them if you can, but if you feel you can’t, be honest to yourself and actually make it a separate singleton (or find one whose purpose these operations fit).
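
Being honest about it costs almost nothing (a sketch; the class name is invented):

@interface ULIPhoneBookStore : NSObject

+(instancetype) sharedStore;

@end

@implementation ULIPhoneBookStore

+(instancetype) sharedStore
{
    static ULIPhoneBookStore *sharedInstance = nil;
    static dispatch_once_t onceToken;
    dispatch_once( &onceToken, ^{
        sharedInstance = [ULIPhoneBookStore new];
    });
    return sharedInstance;
}

@end

// Callers now say [ULIPhoneBookStore sharedStore] instead of reaching through
// NSApplication.sharedApplication.delegate and importing the most
// application-specific header in the project.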

Just say no to the application delegate as the universal singleton.

Update: If this hasn’t convinced you, here’s another blogger with a more CoreData-centric discussion of the issue, coming to the same conclusion: Don’t abuse the app delegate.

Mapping Strings to Selectors

Back in the old days of Carbon, when you wanted to handle a button press, you set up a command ID on your button, which was a simple integer, and then implemented a central command-handling function on your window that received the command ID and used a big switch statement to dispatch it to the right action.

In Cocoa, thanks to message sending and target/action, we don’t have this issue anymore. Each button knows the message to send and the object to send it to, and just triggers the action directly. No gigantic switch statement.

However, we still have a similar issue in key-value observing: When you call addObserver:forKeyPath:options:context:, all key-value-observing notifications go through the one bottleneck: observeValueForKeyPath:ofObject:change:context:. So, to detect which property was changed, you have to chain several if statements together and check whether the key path is the one you registered for (and check the ‘context’ parameter so you’re sure this is not just a KVO notification your superclass or subclass requested), and then dispatch it to a method that actually reacts to it.

It would be much nicer if Apple just called a method that already contained the name of the key-value path, wouldn’t it? E.g. if the key path you are observing is passwordField.text, why doesn’t it call observeValueOfPasswordField_TextOfObject:change:context:?

But there is a common Cocoa coding pattern that can help us with this: Mapping strings to selectors. The centerpiece of this pattern is the NSSelectorFromString function. So imagine you just implemented observeValueForKeyPath:ofObject:change:context: like this:

-(void) observeValueForKeyPath: (NSString*)keyPath ofObject: (id)observedObject change: (NSDictionary*)changeInfo context: (void*)context
{
    NSString *sanitizedKeyPath = [keyPath stringByReplacingOccurrencesOfString: @"." withString: @"_"];
    NSString *selName = [NSString stringWithFormat: @"observeValueOf%@OfObject:change:context:", sanitizedKeyPath];
    SEL      action = NSSelectorFromString(selName);
    if( [self respondsToSelector: action] )
    {
        NSInvocation * inv = [NSInvocation invocationWithMethodSignature: [self methodSignatureForSelector: action]];
        [inv setTarget: self]; // Argument 0
        [inv setSelector: action]; // Argument 1
        [inv setArgument: &observedObject atIndex: 2];
        [inv setArgument: &changeInfo atIndex: 3];
        [inv setArgument: &context atIndex: 4];
        [inv invoke];
    }
    else
        [super observeValueForKeyPath: keyPath ofObject: observedObject change: changeInfo context: context];
}

We build a string that includes the name of the key-path, turn it into an actual selector, and then use -performSelector:withObject:, or in more complex cases like this one NSInvocation, to actually call it on ourselves.

For cases that have no clear mapping like this, you can always maintain an NSMutableDictionary where the key is whatever string your input is and the value the selector name for your output, and then use that to translate between the two. When you make whatever call equivalent to addObserver: you have in that case, it would add an entry to the dictionary. That’s probably how NSNotificationCenter does it internally.
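
A sketch of that dictionary variant, with invented names: registration stores the selector name under the input string, and dispatch looks it up again (self.handlers is a hypothetical NSMutableDictionary property):

-(void) addHandlerForEvent: (NSString *)eventName selectorName: (NSString *)selectorName
{
    self.handlers[eventName] = selectorName;    // e.g. @"PasswordChanged" -> @"handlePasswordChange:"
}

-(void) dispatchEvent: (NSString *)eventName withObject: (id)payload
{
    NSString *selName = self.handlers[eventName];
    if( selName == nil )
        return;    // Nobody registered for this string.
    SEL action = NSSelectorFromString( selName );
    if( [self respondsToSelector: action] )
        [self performSelector: action withObject: payload];
}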

Update:
As Peter Hosey pointed out, another good use case for this pattern is -validateMenuItem: where one could turn the menu item’s action into a string and concatenate that with ‘validate’.
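
That could look roughly like the following sketch (not Peter’s actual code). It builds e.g. validateDoThing: from an action named doThing:, and calls it through an IMP cast because the validator returns a BOOL rather than an object:

-(BOOL) validateMenuItem: (NSMenuItem *)menuItem
{
    NSString *actionName = NSStringFromSelector( menuItem.action );    // e.g. "doThing:"
    actionName = [actionName stringByReplacingOccurrencesOfString: @":" withString: @""];
    NSString *selName = [NSString stringWithFormat: @"validate%@%@:", [[actionName substringToIndex: 1] uppercaseString], [actionName substringFromIndex: 1]];
    SEL validator = NSSelectorFromString( selName );    // e.g. validateDoThing:
    if( [self respondsToSelector: validator] )
    {
        BOOL (*validatorFunc)(id, SEL, NSMenuItem *) = (BOOL (*)(id, SEL, NSMenuItem *))[self methodForSelector: validator];
        return validatorFunc( self, validator, menuItem );
    }
    return YES;    // Anything without a specific validator stays enabled.
}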

WWDC 2013 predictions

Just like last year, I thought I’d note down my thoughts about WWDC, make some predictions, and then after WWDC come back here and compare them to what actually got announced.

  • I think it’s Mac Pro time. As in, not necessarily a new Mac Pro, but rather a successor. My guess is a beefed-up Mac mini (maybe a little larger). Several separate Thunderbolt lanes to make up for the missing room for PCI cards.
  • We will see the next iOS. No idea what will be in it.
  • Rumors around the ‘net say Apple pulled people off MacOS to finish iOS, so while I think we’ll get a MacOS announcement, we might not get a new MacOS right away. If we do, it’ll probably be a simpler update. A few iOS features, a few incremental improvements. Hopefully we’ll finally see a coherent story emerge instead of the weird Launchpad/Finder dichotomy.
  • I think there will be other Mac updates. All Macs are pretty much overdue for an update these days. But I’m not sure we’ll see updates for all of them at WWDC.
  • I wish for a MacBook Air 11″ Retina. Something fast, portable with room for more RAM. I think it’s difficult – Retina displays of an MBA size are pretty much like an iPad display, but a Mac needs a more powerful GPU than an iPad because it runs several apps at once and can attach large external displays – but I think we’re close enough to get these now. The main issue as I understand it is power consumption. The MBA 11 doesn’t provide much room for a battery to begin with, and more pixels and a suitable GPU need more power as well. But Intel’s new CPUs supposedly use less power, so maybe that provides the savings needed to have power to squander on a Retina display and GPU.

That’s all I have. Feel free to leave your ideas and thoughts in the comments.

Common Cocoa Coding Patterns: NSError returns


Error reporting wasn’t something Cocoa did in much detail in the early days. There were nil return values instead of objects, or a BOOL that was YES on success.

C didn’t have exceptions, and by the time ObjC got “fake” exceptions based on the longjmp() mechanism, all wrapped up nicely in NSException and NS_DURING/NS_HANDLER/NS_ENDHANDLER macros, most of the framework API had already been written and it would have been too much work to make all of it work with exceptions and adapt all applications to watch for exceptions in error situations. So they were reserved mostly for Distributed Objects (where an additional error return wasn’t possible) and programming errors.

But with the release of Safari and WebKit in 2003, someone at Apple realized they needed more detailed and standardized error information, and introduced the NSError class. NSError is a simple container object for any error code you might encounter on your Apple device, plus a dictionary for any other info like an error message.

Error codes are not unique across all the libraries, frameworks and system services that are on your device, so each error code is also identified by its domain, an arbitrary string constant. For example, Mac OS X offers the NSCocoaErrorDomain, NSOSStatusErrorDomain, NSPOSIXErrorDomain and NSMachErrorDomain error domains, which let you wrap Cocoa framework error codes, Carbon/CoreServices error codes, standard Unix error codes and kernel errors unambiguously, all by wrapping them in an NSError. And you can make up your own for your application, library, and even for a single class. Whatever logical unit of encapsulation makes the most sense.
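
For example, a custom domain and a method reporting failures through it might be sketched like this (all names invented, including the phoneBooksByCity lookup):

NSString * const ULIPhoneBookErrorDomain = @"ULIPhoneBookErrorDomain";
enum { ULIPhoneBookErrorCityNotFound = 1 };

-(NSData *) phoneBookForCity: (NSString *)city error: (NSError **)outError
{
    NSData *data = [self.phoneBooksByCity objectForKey: city];    // Hypothetical lookup.
    if( data == nil && outError != NULL )
    {
        *outError = [NSError errorWithDomain: ULIPhoneBookErrorDomain
                        code: ULIPhoneBookErrorCityNotFound
                        userInfo: @{ NSLocalizedDescriptionKey: @"There is no phone book for that city." }];
    }
    return data;
}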

Most NSErrors are returned as a return parameter. This has the advantage that methods that already have a return value can still be implemented as they were before the arrival of NSError, and can return nil to have an entire chained expression collapse. E.g., a fictional

NSError    *theErr = nil;
ULIObject  *obj = [[[ULIObject allocGivingError: &theErr] initGivingError: &theErr] autoreleaseGivingError: &theErr];

can fail at any of the three calls and return nil, and none of the subsequent messages will be sent, nor will they touch theErr.

However, one thing Apple tells us is that we shouldn’t look at theErr above unless obj is nil. Why is that? Well, imagine a possible implementation of our fictional autoreleaseGivingError::

-(id) autoreleaseGivingError: (NSError**)outError
{
    ULIAutoreleasePool* currentPool = [ULIAutoreleasePool _currentPoolGivingError: outError];
    if( currentPool == nil )
        currentPool = [ULIAutoreleasePool _createBottomPool];
    if( currentPool == nil )    // Still couldn't create?
    {
        // Pass on whatever error _currentPoolGivingError: had.
        return nil;
    }
    
    [currentPool addObject: self];
    return self;
}

Our fictional internal _currentPoolGivingError: method here might return nil and give us an NSError when there is no pool in place yet.

But in the most common case, we will be able to recover from this error by creating the pool [1].

So in most cases, we’ll just create the pool and add the object to it. If callers look at theErr in such a situation, they will see the error object put there by _currentPoolGivingError:, from which we recovered. So they will see an error where none occurred.

And that, kids, is why you always check the return value, and not just the error parameter.

[1] This is nonsense in real life, because nobody would ever release this bottom pool and we’d have a quiet leak, but let’s just assume in our example’s world ULIRunLoop will release this bottom pool once it regains control, as part of a lazy-allocation-of-pools scheme.