Objective-C

There are 38 entries in this category.

Auto Layout: How to do percentage-based layouts

I recently had to implement a two-directional slider (i.e. a box with an indicator that can go anywhere in it). I wanted to do it using modern auto layout, and I needed it to resize properly on rotation without me having to change internal variables.

That meant that the position of the slider knob would have to be specified as a percentage (well, fraction) in the multiplier of the constraints, so that whatever size the slider UIView had, it would adapt.

My first attempt was to simply specify the position as a multiple of the width: a constraint on the indicator’s centerXAnchor with multiplier: 0.0 would be the left edge, and multiplier: 1.0 the right edge (and analogously for the Y direction).
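In code, that first attempt would look roughly like this sketch. I’m using the older NSLayoutConstraint class method here, since it lets you put a multiplier on a position attribute:

[NSLayoutConstraint constraintWithItem: self.indicatorView
                             attribute: NSLayoutAttributeCenterX
                             relatedBy: NSLayoutRelationEqual
                                toItem: self
                             attribute: NSLayoutAttributeTrailing
                            multiplier: 0.001	// ~left edge; 1.0 would be the right edge.
                              constant: 0.0].active = YES;
// (Auto layout rejects a multiplier of exactly 0 on location attributes,
//	hence the tiny nonzero value for the left edge.)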

That worked fine, but had the problem that the indicator could end up “tucked under” the left or right edges. I tried using UIView‘s layoutMargins, but that didn’t work either. In the end, I would have had to manually add the fraction that my indicator’s width corresponded to at both ends to keep that from happening. Might as well use pixels, then.

The autolayout margin guides

Then I remembered I could just add additional UILayoutGuides (macOS has NSLayoutGuide) to define the margins I wanted to keep clear, then define another layout guide relative to those for the actual draggable area, relative to which I could constrain my indicator.

So first I built 4 guides that were pinned to the edges, had a width (resp. height) of half the indicator view’s width (resp. height), and the same height/position (resp. width/position) as the slider view.

Now we had the margins. Then I added a 5th guide that covered the draggable area inside those guides. Then took the old constraints and made them relative to this guide instead of the entire view.

That didn’t work. Positions inside a guide start at 0, so a pure multiplier would always end up in the upper left. And if I added a constant the size of the margins, I’d have something that wouldn’t update when the view resized again. Might as well use pixels, then.

Drag area and indicator position layout guide (in blue)

Then it struck me: Why not just add another guide? This guide is pinned to the upper left of the draggable area, and its width/height are percentages of the draggable area’s width/height. I can now set the multipliers on the width/height constraints to my slider percentages, and the lower right corner of this 6th “indicator position” guide will be exactly where I want the indicator to be.

So I just change this guide’s multipliers when the indicator moves, and bind the indicator view’s center to the bottom and right anchors of the indicator position guide, and it all works!
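A sketch of the interesting constraints in that final setup (the guide and property names are my own; the indicator-position guide’s top and left are pinned to the draggable area’s top left elsewhere):

// The guide's size is a fraction of the draggable area, so its lower
//	right corner sits at (xFraction, yFraction) inside that area:
self.xConstraint = [self.positionGuide.widthAnchor constraintEqualToAnchor: self.draggableAreaGuide.widthAnchor multiplier: xFraction];
self.yConstraint = [self.positionGuide.heightAnchor constraintEqualToAnchor: self.draggableAreaGuide.heightAnchor multiplier: yFraction];
[NSLayoutConstraint activateConstraints: @[
	self.xConstraint, self.yConstraint,
	[self.indicatorView.centerXAnchor constraintEqualToAnchor: self.positionGuide.rightAnchor],
	[self.indicatorView.centerYAnchor constraintEqualToAnchor: self.positionGuide.bottomAnchor] ]];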

Note

You may note that I keep talking about changing the multiplier on constraints. Yeah, that’s not really possible; the only thing on a constraint that can change is the constant (well, and the identifier, but that would ruin the joke).

So yeah, wherever you read that, what I do is remove and recreate the constraint. Sadly, constraints do not have a -removeFromSuperview method, so what I really have to do is walk from a constraint’s firstItem and secondItem properties up to their common ancestor and tell it to remove the constraint (if they are views … if they are guides, that means they’re constraints on self or one of its superviews).
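For what it’s worth, since iOS 8 and OS X 10.10, NSLayoutConstraint has an active property whose setter does that ancestor-walking for you, so a sketch of the “change the multiplier” dance could look like this:

// Deactivate the old constraint, then build and activate an identical
//	one that differs only in its multiplier:
-(NSLayoutConstraint *)	replaceConstraint: (NSLayoutConstraint *)oldConstraint withMultiplier: (CGFloat)multiplier
{
	oldConstraint.active = NO;
	NSLayoutConstraint	*	newConstraint = [NSLayoutConstraint constraintWithItem: oldConstraint.firstItem
		attribute: oldConstraint.firstAttribute relatedBy: oldConstraint.relation
		toItem: oldConstraint.secondItem attribute: oldConstraint.secondAttribute
		multiplier: multiplier constant: oldConstraint.constant];
	newConstraint.active = YES;
	return newConstraint;
}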

Microsoft supports UIKit

iPhoneOnWindows

This week’s Build conference held a big surprise: Microsoft announced that they’ve built a UIKit compatibility layer for their various flavours of Windows.

Now I’m mainly a Mac developer and only hear of Windows things from friends and colleagues at the moment (the last time I did Windows work was around Windows XP), but my impression so far was that MS was frantically searching for a new API.

I don’t remember all occurrences, but I remember them announcing Silverlight, and .NET with WPF, and Windows RT that only supported the new APIs, and all sorts of other things, only to cancel them again later.

So my impression as an outsider is that new APIs weren’t trustworthy, and that MS would always fall back to supporting the main line of old APIs that they carry around for compatibility reasons anyway.

Announcing UIKit and Android support actually makes a lot of sense in that context:

Although it appears to acknowledge that Windows Phone really didn’t take off, it does solve the catch-22 that MS found themselves in: a lack of apps. In the ideal case, they’ll now get all the iOS apps Apple sells, plus the ones Apple rejected for silly reasons, plus those Android apps that iOS users long for.

If this gambit pays off, MS could leap-frog Apple *and* Android.

It also increases trust among developers who are sticking to ancient APIs: iOS and Android are the only modern APIs Microsoft could implement that developers would confidently develop against after all these false starts, because even if MS dropped support for them, developers would still have the entire iOS/Android ecosystem to deploy against. So coding against UIKit for Windows Phone is a reasonably safe investment.

Swift

Of course, the elephant in the room here is Apple’s recent move to Swift. Now, given that Apple’s frameworks still all seem to be Objective-C internally (even WatchKit), I don’t think MS have missed the train. By supporting Objective-C, they might even pick up some Swift critics who are jumping Apple’s ship.

But Swift damages the long-term beauty of MS’s “just call native Windows API from Objective-C” story. They will have to bridge their API to Swift (like Apple does with some of their C-based API right now), instead of getting people to use more and more classic Windows API in their Cocoa apps until the code won’t run on iOS anymore.

Still, that’s a small aesthetic niggle. MS already have a code-generator back-end that they can plug any parser onto, and Swift doesn’t appear to be a particularly difficult language to parse. In any event, parsers are easier than good code generation. For MS to create a Swift compiler is a solved problem, and I’d be surprised if they weren’t already working on it.

Of course, if MS had known about Swift when they started their UIKit for Windows, would they still have written it in Objective-C? Or would they have just written it in Swift with a bridging header?

So given the situation MS have managed to get themselves into, this sounds like it might be a viable solution to survive and, maybe, even come back from. Still, it is an acknowledgement of how far MS has fallen that they need to implement a competitor’s API on their platform.

Handling keypresses in Cocoa games

WASDKeys

At first blush, keyboard event handling for games in Cocoa seems easy: You add -acceptsFirstResponder and -becomeFirstResponder overrides to your custom game map view, then override -moveUp:, -moveDown:, -moveLeft: and -moveRight: to handle the arrow keys.

However, if you play a game like that, you’ll notice one big difference from most other games: It only ever accepts one keypress at a time. So if you’re holding down the up arrow key to have your character run forward, then quickly press the right arrow key to sidestep an obstacle, your character will stop in its tracks, as if you had released the up arrow key.

This makes sense for text entry, where you might accidentally still be holding down one character while another finger presses the next, but for a game this is annoying. You want to be able to chord arbitrary key combinations together.

I found a clever solution for game keyboard handling on the CocoaDev Wiki, but it’s a bit old and incomplete, so I thought I’d provide an updated technique:

The solution is to keep track of which keys are down yourself. Override -keyDown: and -keyUp: to keep track of which keys are being held down. I’m using a C++ unordered_set for that, but an Objective-C NSMutableIndexSet would work just as well:

// This view uses a C++ std::unordered_set, so it goes in an
//	Objective-C++ (.mm) file:
#import <Cocoa/Cocoa.h>
#include <unordered_set>

@interface ICGMapView : NSView
{
	std::unordered_set<unichar>	pressedKeys;
}

@end

and in the implementation:
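First, the responder override mentioned above, without which the view never receives key events:

-(BOOL)	acceptsFirstResponder
{
	return YES;	// Let this view become first responder so it gets key events.
}

Then the key handlers themselves: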

-(void)	keyDown:(NSEvent *)theEvent
{
	NSString	*	pressedKeyString = theEvent.charactersIgnoringModifiers;
	unichar			pressedKey = (pressedKeyString.length > 0) ? [pressedKeyString characterAtIndex: 0] : 0;
	if( pressedKey )
		pressedKeys.insert( pressedKey );
}


-(void)	keyUp:(NSEvent *)theEvent
{
	NSString	*	pressedKeyString = theEvent.charactersIgnoringModifiers;
	unichar			pressedKey = (pressedKeyString.length > 0) ? [pressedKeyString characterAtIndex: 0] : 0;
	if( pressedKey )
	{
		auto foundKey = pressedKeys.find( pressedKey );
		if( foundKey != pressedKeys.end() )
			pressedKeys.erase(foundKey);
	}
}

Of course, you’ll also want to react to modifier keys, and like most games, you will want to treat them not as modifiers in a shortcut, but as regular keys, so people can press Command to fire, for example. That’s basically the same, except that you override -flagsChanged:, and that there are no standard character constants for the modifier keys. So let’s just define our own:

// We need key codes under which to save the modifiers in our "keys pressed"
//	table. We must pick characters that are unlikely to be on any real keyboard.
//	So we pick the Unicode glyphs that correspond to the symbols on these keys.
enum
{
	ICGShiftFunctionKey			= 0x21E7,	// -> NSShiftKeyMask
	ICGAlphaShiftFunctionKey	= 0x21EA,	// -> NSAlphaShiftKeyMask
	ICGAlternateFunctionKey		= 0x2325,	// -> NSAlternateKeyMask
	ICGControlFunctionKey		= 0x2303,	// -> NSControlKeyMask
	ICGCommandFunctionKey		= 0x2318	// -> NSCommandKeyMask
};

-(void)	flagsChanged: (NSEvent *)theEvent
{
	if( theEvent.modifierFlags & NSShiftKeyMask )
	{
		pressedKeys.insert( ICGShiftFunctionKey );
	}
	else
	{
		auto foundKey = pressedKeys.find( ICGShiftFunctionKey );
		if( foundKey != pressedKeys.end() )
			pressedKeys.erase(foundKey);
	}

	if( theEvent.modifierFlags & NSAlphaShiftKeyMask )
	{
		pressedKeys.insert( ICGAlphaShiftFunctionKey );
	}
	else
	{
		auto foundKey = pressedKeys.find( ICGAlphaShiftFunctionKey );
		if( foundKey != pressedKeys.end() )
			pressedKeys.erase(foundKey);
	}

	if( theEvent.modifierFlags & NSControlKeyMask )
	{
		pressedKeys.insert( ICGControlFunctionKey );
	}
	else
	{
		auto foundKey = pressedKeys.find( ICGControlFunctionKey );
		if( foundKey != pressedKeys.end() )
			pressedKeys.erase(foundKey);
	}

	if( theEvent.modifierFlags & NSCommandKeyMask )
	{
		pressedKeys.insert( ICGCommandFunctionKey );
	}
	else
	{
		auto foundKey = pressedKeys.find( ICGCommandFunctionKey );
		if( foundKey != pressedKeys.end() )
			pressedKeys.erase(foundKey);
	}

	if( theEvent.modifierFlags & NSAlternateKeyMask )
	{
		pressedKeys.insert( ICGAlternateFunctionKey );
	}
	else
	{
		auto foundKey = pressedKeys.find( ICGAlternateFunctionKey );
		if( foundKey != pressedKeys.end() )
			pressedKeys.erase(foundKey);
	}
}

An alternative would be to just enlarge the numeric type used to store keys in your unordered_set. Instead of two-byte unichar values, you’d just pick uint32_t, and then define the constants as values that are out of range for an actual unichar, like 0xffff1234. If you’re using NSMutableIndexSet, you’re lucky: it uses NSUInteger, which is already larger.

Then add an NSTimer to your class that periodically checks whether any keys are pressed, and if so, reacts to them:

-(void) dispatchPressedKeys: (NSTimer*)sender
{
	BOOL	shiftKeyDown = pressedKeys.find(ICGShiftFunctionKey) != pressedKeys.end();
	for( unichar pressedKey : pressedKeys )
	{
		switch( pressedKey )
		{
			case 'w':
				[self moveUp: self fast: shiftKeyDown];
				break;
			...
		}
	}
}
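Scheduling that timer is a one-liner wherever it fits your setup, e.g. in -viewDidMoveToWindow (keyTimer is an ivar you’d add; the 0.05-second interval is an assumption, tune it to your game):

// Poll pressedKeys regularly; this interval is also our key-repeat rate:
keyTimer = [NSTimer scheduledTimerWithTimeInterval: 0.05 target: self
			selector: @selector(dispatchPressedKeys:) userInfo: nil repeats: YES];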

Since your timer is polling at an interval here, and you can’t make that interval too fast (it is also the rate at which key repeats will be sent), it is theoretically possible to lose keypresses whose duration is shorter than your timer interval. To avoid that, you could store a struct in an array instead of just the keypress in a set. This struct would remember when the key was originally pressed down, and when the last key event was sent out.

That way, when the user begins holding down a key, you’d immediately trigger processing of this key once, and make note of when that happened. From then on, your -dispatchPressedKeys: method would check whether it’s been long enough since the last time it processed that particular key, and would send key repeats for each key that is due. As a bonus, when a key is released, you could also notify yourself of that.
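A sketch of what such a bookkeeping struct might look like (all names made up):

struct ICGKeyPress
{
	unichar			key;			// The key being held down.
	NSTimeInterval	pressedTime;	// When the key first went down.
	NSTimeInterval	lastRepeatTime;	// When we last processed a repeat for this key.
	bool			released;		// Set in -keyUp:, so we can notify ourselves once, then remove the entry.
};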

You could even create “key event” objects of some sort to hand into your engine.

How Drawing on iOS Works

Someone on Stack Overflow recently asked about the various drawing APIs on iOS, and what the difference between using CALayers directly or using them indirectly through UIViews is, and how CoreGraphics (aka Quartz) fits into the equation. Here is the answer I gave:

The difference is that UIView and CALayer essentially deal in fixed images. These images are uploaded to the graphics card (if you know OpenGL, think of an image as a texture, and a UIView/CALayer as a polygon showing such a texture). Once an image is on the GPU, it can be drawn very quickly, and even several times, and (with a slight performance penalty) even with varying levels of alpha transparency on top of other images.

CoreGraphics (or Quartz) is an API for generating images. It takes a pixel buffer (again, think OpenGL texture) and changes individual pixels inside it. This all happens in RAM and on the CPU, and only once Quartz is done, does the image get “flushed” back to the GPU. This round-trip of getting an image from the GPU, changing it, then uploading the whole image (or at least a comparatively large chunk of it) back to the GPU is rather slow. Also, the actual drawing that Quartz does, while really fast for what you are doing, is way slower than what the GPU does.

That’s obvious, considering the GPU is mostly moving around unchanged pixels in big chunks. Quartz does random-access of pixels and shares the CPU with networking, audio etc. Also, if you have several elements that you draw using Quartz at the same time, you have to re-draw all of them when one changes, then upload the whole chunk, while if you change one image and then let UIViews or CALayers paste it onto your other images, you can get away with uploading much smaller amounts of data to the GPU.

When you don’t implement -drawRect:, most views can just be optimized away. They don’t contain any pixels, so can’t draw anything. Other views, like UIImageView, only draw a UIImage (which, again, is essentially a reference to a texture, which has probably already been loaded onto the GPU). So if you draw the same UIImage 5 times using a UIImageView, it is only uploaded to the GPU once, and then drawn to the display in 5 different locations, saving us time and CPU.

When you implement -drawRect:, this causes a new image to be created. You then draw into that on the CPU using Quartz. If you draw a UIImage in your drawRect, it likely downloads the image from the GPU, copies it into the image you’re drawing to, and once you’re done, uploads this second copy of the image back to the graphics card. So you’re using twice the GPU memory on the device.

So the fastest way to draw is usually to keep static content separated from changing content (in separate UIViews/UIView subclasses/CALayers). Load static content as a UIImage and draw it using a UIImageView, and put content generated dynamically at runtime in a drawRect. If you have content that gets drawn repeatedly, but by itself doesn’t change (e.g. 3 icons that get shown in the same slot to indicate some status), use UIImageView as well.
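As a hedged sketch of that separation (the view class and image name are made up):

// Static artwork: uploaded to the GPU once, then drawn cheaply many times.
UIImageView	*	background = [[UIImageView alloc] initWithImage: [UIImage imageNamed: @"board"]];
[self.view addSubview: background];

// Dynamic content: only this view pays the -drawRect: round-trip cost.
MYGamePieceView	*	pieces = [[MYGamePieceView alloc] initWithFrame: background.frame];
pieces.opaque = NO;	// Transparent overlay; see the caveat below.
[self.view addSubview: pieces];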

One caveat: There is such a thing as having too many UIViews. Particularly transparent areas take a bigger toll on the GPU to draw, because they need to be mixed with other pixels behind them when displayed. This is why you can mark a UIView as “opaque”, to indicate to the GPU that it can just obliterate everything behind that image.

If you have content that is generated dynamically at runtime but stays the same for the duration of the application’s lifetime (e.g. a label containing the user name) it may actually make sense to just draw the whole thing once using Quartz, with the text, the button border etc., as part of the background. But that’s usually an optimization that’s not needed unless the Instruments app tells you differently.

Cocoa and the Builder Pattern

There’s been a nice discussion about the Builder pattern on Twitter today. The Builder pattern is a nice tool to have, particularly because it addresses a few common problems.

What Builder Pattern?

In short, the Builder pattern is a pattern where you have one object that you configure that then creates another object based on that configuration. The nice thing here is that you can first build your object step by step, like you’d e.g. do with NSMutableString, but then the actual construction of the object happens in one go. Very handy for immutable objects.

Usually, a setter for a Builder object returns self, like retain or autorelease do. That way, you can create something in Java or C++ that almost looks like Objective-C:

Image theImage = (new Image::Builder)->SetWidth(100)->SetHeight(80)->SetDepth(8)->Build();

Where the Build() method releases the builder and returns the actual, immutable Image object.

Extending init methods

When you add a parameter to an initializer in Objective-C, it is annoying. You usually add the parameter to the initializer, then create a compatibility version with the old method’s name that calls the newer version with a default value for the extra parameter.

C++ has solved that problem by allowing you to specify default values for parameters (Java approximates this with overloads), but that doesn’t maintain binary stability: if you add a parameter, you still have to recompile, but at least you don’t need to change your code.

I guess one fix would be if ObjC supported default arguments to a parameter that would simply result in the creation of a second version of this initializer with the label and parameter removed:

-(id) initWithBanana: (NSBanana*)theBanana curvature: (CGFloat)curvature = 5
{
    // magic happens here
}

Would be the same as writing:

-(id) initWithBanana: (NSBanana*)theBanana curvature: (CGFloat)curvature
{
    // magic happens here
}


-(id) initWithBanana: (NSBanana*)theBanana
{
    return [self initWithBanana: theBanana curvature: 5];
}

Of course, you’d still need at least one parameter, because ObjC has no way of knowing what part of the message is the name and what is the label for the second parameter (for init there could be special code, I guess, but what about a -exfoliateCow:withSpeed: method?). And defaulting to -initWithBanana: if the first parameter has a default is obviously not always desirable either. It would solve the annoyance of telescoping constructors, at the least.

The Builder pattern doesn’t have this problem. Each parameter has a setter that you use to set it. A new builder could have defaults for all parameters when it is created. Then you change the ones you want to customize, and call -build on it to get the new object. If a new setter is added, that’s fine. You don’t call it, you get the default. The maintainers only add the one setter, no compatibility method needed.
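To make that concrete, here’s a hedged sketch of what such a builder could look like in Objective-C (all class and property names are made up):

@interface MYImageBuilder : NSObject
@property NSUInteger width;		// Every property starts out at a sensible default.
@property NSUInteger height;
@property NSUInteger depth;
-(MYImage *) build;				// Hands the settings to MYImage's private initializer.
@end

// Client code only sets what it cares about:
MYImageBuilder	*	builder = [MYImageBuilder new];
builder.width = 100;
builder.height = 80;	// depth keeps its default.
MYImage	*	image = [builder build];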

Thread safety and immutable objects

The easiest way to get thread safety is to prohibit data from changing. If data is immutable, there is nothing to be synchronized between threads, and no need for one thread to wait for the other. However, immutable objects are also annoying, as they need to be fully specified in their init method.

A case where this is a problem in Cocoa is NSImage. NSImage is an immutable object by convention, but not actually. It is an object that has its own builder built in. You are expected to know that, for an NSImage to be thread safe, you are expected to create it, set its attributes, draw something in it, and then stop messing with it, treating it as an immutable, read-only object from then on.

The problem is, nobody enforces it. NSImage is a perfectly mutable object, with setters and getters. There is no exception thrown when you violate this verbal contract. Of course Apple could have added a “makeImmutable” method to NSImage that causes such exceptions to be thrown when you try to edit an instance. But then they’d have to add code to each setter that errors out (or at least use some aspect-oriented-programming mechanism to inject code before every setter that performs this check automatically).

The Builder pattern would solve that: They can have a huge, private constructor on NSImage that changes with every release to add new parameters and initialize that immutable object, while the Builder would present a stable and convenient API to all clients. There would not be any setters on NSImage.

But it is ugly…

Admittedly, it feels a bit inelegant to build an object that builds an object. The way NSImage works is so much nicer. But Mike Lee actually offers a neat approach that works almost as well:

Just pass in a list of properties. This could be a dictionary of properties, or even just a variadic argument list like -dictionaryWithObjectsAndKeys: takes it. You’d define a constant for each possible property (that way if you mis-type the parameter name the compiler tells you, which you don’t get from a raw string). Internally, this constant could even hold the actual name of the property, even if it is never exposed as a method in the public header. So, all your constructor would do is call [self setValue: properties[key] forKey: key] in a loop, once for every element.

You get the same effect as labeled parameters (if you put the keys first, even more so). You also get the same effect as optional parameters. The binary ABI never changes, so that’s good, too. The only downside is you need to pass every parameter as an object, and you lose compile-time type checks. OTOH you gain compile-time errors when you try to change the object after creating it (because it declares no setters).
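A sketch of how that could look (the key constants and class are made up; note each constant’s value matches a property name, so KVC can do the work):

NSString	* const	MYImageWidthKey = @"width";		// Value is the property name.
NSString	* const	MYImageHeightKey = @"height";

@implementation MYImage
-(instancetype)	initWithProperties: (NSDictionary *)properties
{
	if( (self = [super init]) )
	{
		for( NSString *key in properties )
			[self setValue: properties[key] forKey: key];	// KVC reaches the ivars even without public setters.
	}
	return self;
}
@end

// Usage:
MYImage	*	image = [[MYImage alloc] initWithProperties:
						@{ MYImageWidthKey: @100, MYImageHeightKey: @80 }];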

Is it worth all that work?

Admittedly, I haven’t had to add parameters to the init method of a public class that often. Nonetheless, I think Mike’s approach and the Builder pattern both are useful things to keep in mind if you ever come up with a class that can be created in numerous configurations (and is likely to gain new properties in the future) but should then be immutable. Class clusters and plug-in classes seem like a typical place where you might need this.

Are your rectangles blurry, pale and have rounded corners?

One common problem with drawing code in Cocoa (iOS and Mac OS X) is that people have trouble getting crisp, sharp lines. Often this problem ends up as a question like “How do I get a 1-pixel line from NSBezierPath” or “Why are my UIBezierPath lines fuzzy and transparent” or “Why are there little black dots at the corners of my NSRect”.

The problem here is that coordinates in Quartz are not pixels. They are actually “virtual” coordinates that form a grid. At 1x resolution (i.e. non-Retina), these coordinates lie at the intersections between pixels; their unit is commonly referred to as “points” to distinguish them from actual pixels on a screen (or on a printer!). This is fine when filling a rectangle, because every pixel that lies inside the coordinates gets filled:

filled_rectangle_between_pixels

But lines are technically (mathematically!) invisible. To draw them, Quartz has to actually draw a rectangle with the given line width. This rectangle is centered over the coordinates:

coordinates_between_pixels

So when you ask Quartz to stroke a rectangle with integral coordinates, it has the problem that it can only draw whole pixels. But here you see that we have half pixels. So what it does is it averages the color. For a 50% black (the line color) and 50% white (the background) line, it simply draws each pixel in 50% grey. For the corner pixels, which are 1/4th black and 3/4ths black, you get lighter/darker shades accordingly:

line_drawing_between_pixels

This is where your washed-out drawings, half-transparent and too-wide lines come from. The fix is now obvious: Don’t draw between pixels, and you achieve that by moving your points by half a pixel, so your coordinate is centered over the desired pixel:

coordinates_on_pixels

Now of course just offsetting may not be what you wanted. Because if you compare the filled variant to the stroked one, the stroke is one pixel larger towards the lower right. If you’re e.g. clipping to the rectangle, this will cut off the lower right:

coordinates_on_pixels_cut_off

Since people usually expect the rectangle to stroke inside the specified rectangle, what you usually do is that you offset by 0.5 towards the center, so the lower right effectively moves up one pixel. Alternately, many drawing apps offset by 0.5 away from the center, to avoid overlap between the border and the fill (which can look odd when you’re drawing with transparency).
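In code, the usual fix looks something like this Mac-flavoured sketch (the rect is just an example):

NSRect	box = NSMakeRect( 10, 10, 100, 50 );
// Move the stroke half a point toward the center so our 1-point line
//	covers whole pixels on a 1x screen:
NSRect	strokeBox = NSInsetRect( box, 0.5, 0.5 );
[[NSColor blackColor] set];
[NSBezierPath setDefaultLineWidth: 1.0];
[NSBezierPath strokeRect: strokeBox];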

Note that this only holds true for 1x screens. 2x Retina screens exhibit this problem differently, because each of the pixels below is actually drawn by 4 Retina pixels, which means they can actually draw the half-pixels needed for a 1 point wide line:

coordinates_between_pixels_retina

However, you still have this problem if you want to draw a line that is even thinner (e.g. 0.5 points or 1 device pixel). Also, since Apple may in the future introduce other Retina screens where e.g. every pixel could be made up of 9 Retina pixels (3x), you should really not rely on fixed numbers. Instead, there are now API calls to convert rectangles to “backing aligned”, which do this for you, no matter whether you’re running 1x, 2x, or a fictitious 3x. Otherwise, you may be moving things off pixels that would have displayed just fine:

coordinates_on_and_between_pixels_future_retina
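On the Mac, for example, NSView’s -backingAlignedRect:options: does this (a minimal sketch; iOS offers comparable scale-aware conversions):

// Inside an NSView subclass: snap to real device pixels, whatever
//	the backing scale factor happens to be:
NSRect	pixelAlignedBox = [self backingAlignedRect: box options: NSAlignAllEdgesNearest];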

And that’s pretty much all there is to sharp drawing with Quartz.

The fast road to unit tests with Xcode

Supposedly Xcode has unit test support. I’ve never seen that work for more than two Xcode revisions. So I’ve come up with a minimal unit test scheme that works reliably.

1) Add a “command line tool” target (Foundation application, C++ application, whatever makes sense). Put your test code in its main.m or whatever. After each test, print out a line starting with “error: ” if the test failed. If you want to be able to see the successes as well, start them with “note: “. Keep a counter of failed tests (e.g. in a global) and use that number as the return value of your main(). (A minimal sketch follows after these steps.)

2) Add a “Run Shell Script” build phase to this target, at the very end. Set it to run ${TARGET_BUILD_DIR}/${PRODUCT_NAME}. Yes, that’s right, we make it build the unit test app, then immediately run it. Xcode will see the “error: ” and “note: ” lines and format them correctly, including making the build fail.

3) Optionally, if you want these tests to run with every build, make that command line tool target a dependency of your main app, so it runs before every build. Otherwise, just make sure your build tester regularly builds this test target.

4) Add a preprocessor switch to the tests that lets you change all “error:” lines into “warning:” instead. Otherwise, when a test fails, you won’t be able to run it in the debugger to see what’s actually going wrong.
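Here’s a minimal sketch of what such a test tool’s main.m could look like (the ICGAssert macro and the example test are made up):

#import <Foundation/Foundation.h>

static int	gFailedTests = 0;

// Print lines in the format Xcode recognizes, and count the failures:
#define ICGAssert(cond) \
	do { \
		if( !(cond) ) { gFailedTests++; printf( "error: Test failed: %s\n", #cond ); } \
		else printf( "note: Test passed: %s\n", #cond ); \
	} while(0)

int	main( int argc, const char * argv[] )
{
	@autoreleasepool
	{
		ICGAssert( [@"X" isEqualToString: @"X"] );	// An example test.
	}
	return gFailedTests;	// Nonzero also makes the shell script phase fail.
}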

Cocoa: String comparisons and the optimizer

Woman in front of a mirror

A while ago, a friend came to me with this bit of code:

NSString *a = @"X";
NSString *b = @"X";
if( a == b )
{
    NSLog(@"Same!");
}

“How come it works with the == operator? Didn’t you have to call isEqualToString: in the old days?”

Before we answer his question, let’s go into what he implicitly already knew:

Why wouldn’t == work on objects?

By default, C compares two pointers by simply comparing the addresses. That is logical, fast, and useful. However, it is also a little annoying with strings, arrays and other collections, because you may have two collections that still contain identical objects.

If you have the phone books from 2013 and 2014, do you just want to compare the numbers 2013 and 2014 and be told: “No that’s not the same phone book”, or are you actually interested in whether their contents are different? If nobody’s phone book entry changed in a particular city, wouldn’t you want to know that and save yourself the trip to the phone company to pick up a new phone book?

Since all Objective-C objects are pointers, the only way to do more than compare the addresses needs some special syntax. So NSString offers the isEqualToString: method, which, if the pointers do not match, goes on to check their contents. It compares each character to the one at the same position in the second string to find out whether, even though they’re not the same slip of paper, they at least have the same writing on them.
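So the content comparison you normally want reads:

if( [a isEqualToString: b] )
	NSLog( @"Same contents!" );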

So why does the code above think they’re the same?

After all that, why does the code above think they are the same object? Doesn’t a point to the @"X" in the first line, and b to the @"X" in the second line?

That is conceptually true, and it is what a naïve compiler would do. However, most compilers these days are smart. Compilers know that a string constant can never change. And they see that the contents of both string objects pointed to by a and b are the same. So they just create one constant object to save memory, and make both point to the same object.

There is no difference for your program’s functionality. You get an immutable string containing “X”.

However, note that there is no guarantee that this will happen. Some compilers perform this optimization for identical strings in one file, but not across files. Others are smarter, and give you the same object even across source files in the same project. On some platforms, the string class keeps track of all strings it has created so far, and if you ask for one it already did, gives you that, to save RAM. In most cases, if you load a dynamic library (like a framework), it gets its own copy of each string, because the compiler cannot know whether the surrounding application already has that string (it might be loaded into any arbitrary app).

Mutability is important

This is a neat trick compilers use that works only with immutable objects, like NSString or NSDictionary. It does not work with NSMutableString. Why? Because if it put the same mutable string into a and b, and you call appendString: on a, b would change as well. And of course we wouldn’t want to change our program’s behaviour that way.

For the same reason, NSString may be optimized so that copy is implemented like this:

-(id) copy
{
    return [self retain];
}

That’s right. It gives you the same object, just with the reference count bumped, because you can’t change this string once it has been created. From the outside it looks the same: copy gives you the object with its retain count bumped, so you can release it safely once you’re done with it. It behaves just like a copy. The only hint that this happened is that instead of an NSString with a reference count of 1, owned solely by you, you get one with a reference count of 2, whose ownership you share with another object. But that’s what shared ownership is about, after all.

Of course, this optimization doesn’t work with NSMutableString.

What I take away from this

So if someone walks up to you and shows you code that uses the == operator where it should really be checking for content equality, and argues that “it works, so it is correct”, now you’ll know why it just happens to work:

It’s a fluke, and if Apple decides to switch compilers or finds a better way to optimize performance or memory usage that requires them to no longer perform this optimization, they might just remove it, and this code will break, because it relied on a side effect. And we don’t want our code to break.

The universal catch-all singleton

Personified application delegate creating objects by cracking eggs

One bad habit I see time and time again is filling up your application delegate with junk that has no business being in there.

Avoid putting stuff in your app delegate.

What is the application delegate?

By definition, the application delegate is a controller. Most model classes are standard Apple classes. Those that aren’t are slightly smarter collections of these classes. Most view classes just display the model and forward a few IBActions to the controller in a dynamic way, so are inherently reusable as well (even if not always the particular arrangement of their instances).

The controllers, on the other hand, aren’t really reusable. They glue all this stuff together. They’re what makes your application your application, along maybe with a few bindings. So, again by definition, the application delegate is the least reusable part of an application. It’s the part that kicks everything off, creates all the other controllers and has them load the model and views.

Reusability is best!

The whole point behind OOP was to reduce bugs, speed up development, and help structure your code by keeping it grouped in reusable components. The best way to maintain this separation of components and permit re-using parts of it in different projects, is to keep the boundaries between components (e.g. objects) clean and distinct.

Objects have a clear hierarchy. The top creates objects lower down and gives them the information they need to operate correctly for your application. Nobody reaches up the hierarchy, except maybe to notify whoever created them of occurrences, in the form of delegate messages. That way, the more application-specific your code gets, the fewer other objects know about it. The further down you get, the more reusable.

Moving operations or instance variables that are shared by several objects in your application into the application delegate, and having other objects reach straight up through NSApplication.sharedApplication.delegate to get at them, runs head-on against this desire, and turns your carefully separated code into an inseparable glob of molten sludge. Suddenly *everybody* includes the most application-specific header your application contains.

Don’t lie to yourself

The application delegate is one of the singletons every application contains. Its name is misleading and fuzzy. If you see it as a place to hold code “relevant to the application as a whole”, there is pretty much nothing that is off-topic for it. It is the universal, catch-all singleton.

So why not be honest with yourself: Whenever you add code to the application delegate, and you’re not just doing it to react to a delegate method from NSApplication and create a controller to perform the actual action in response, what you are really doing is creating a singleton.

As we all know, singletons have their uses, but having many singletons is a code smell. So avoid them if you can, but if you feel you can’t, be honest with yourself and actually make it a separate singleton (or find an existing one whose purpose these operations fit).
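For example, instead of stuffing, say, favorites handling into the app delegate, a dedicated singleton is only a few lines (all names made up):

@interface MYFavoritesManager : NSObject
+(instancetype)	sharedFavoritesManager;
// Favorites-related methods go here, not in the app delegate.
@end

@implementation MYFavoritesManager
+(instancetype)	sharedFavoritesManager
{
	static MYFavoritesManager	*	sSharedManager = nil;
	static dispatch_once_t			sOnce;
	dispatch_once( &sOnce, ^{ sSharedManager = [[MYFavoritesManager alloc] init]; } );
	return sSharedManager;
}
@end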

Just say no to the application delegate as the universal singleton.

Update: If this hasn’t convinced you, here’s another blogger with a more CoreData-centric discussion of the issue, coming to the same conclusion: Don’t abuse the app delegate.

What a block really is

BlocksLegoBrick

After quite a while of thinking that Objective-C blocks did some mean magic on the stack, it took seriously using C++’s lambdas (their implementation of the concept) for me to realize what blocks really are.

Effectively, a block is simply a declaration of a class, plus an instantiation of one instance of that class, hidden under syntactic sugar. Don’t believe me? Well, let’s have a look at C++ lambdas to clear things up:

MyVisitorPattern( [localVariableToCapture]( MyObject* objectToVisit ) { objectToVisit->Print(localVariableToCapture); }, 15 );

The lambda expression (everything from the opening square bracket through the closing brace) is a C++ block. It’s pretty much the same as an Objective-C block, with two differences:

  1. You explicitly specify which local variables to capture in square brackets.
  2. Instead of the ^-operator, you use those square brackets to indicate that this is a block.

Seeing the captured variables explicitly listed here, like parameters to a constructor, made me realize that that’s really all a block is: in-line syntax to declare a subclass of a functor (i.e. an object whose entire purpose is to call a single one of its methods), which returns you an instance of that class. In ObjC-like pseudo-code, you could rewrite the above statement as:

@interface MYBlockSubClass : NSBlock
{
    int localVariableToCapture;
}

-(id) initWithLocalVar: (int)inLocalVariableToCapture;

-(void) runForObject: (MyObject*)objectToVisit;

@end

@implementation MYBlockSubClass
-(id) initWithLocalVar: (int)inLocalVariableToCapture
{
    self = [super init];
    if( self )
        localVariableToCapture = inLocalVariableToCapture;
    return self;
}

-(void) runForObject: (MyObject*)objectToVisit
{
    objectToVisit->Print(localVariableToCapture);
}
@end

and at the actual call site:

MyVisitorPattern( [[MYBlockSubClass alloc] initWithLocalVar: localVariableToCapture], 15 );

The difference is that C++ (and even more so Objective-C) automatically declare the class for you, create the instance variables and constructor for the variables you want to capture, pick a unique class name (which you can see in the stack backtraces if you stop the debugger inside a block) and instantiate the class all in a single line of code.

So there you see it, blocks aren’t really black magic, they’re 99% syntactic sugar. Delicious, readability-enhancing syntactic sugar. Mmmmmh…

PS – Of course I’m simplifying. Objective-C blocks are actually Objective-C objects created on the stack, which you usually can’t do in plain Objective-C, though it can be done with some clever C code if you insist.

A more magical approach to blocks

That said, there is a fancier way for a compiler developer to implement blocks that also makes them 100% compatible with regular C functions:

If you implement a function in assembler, you can stick additional data onto the end of a function and calculate an offset between an instruction and the end of the function (e.g. by just filling the end of the function with a bunch of 1-byte No-Ops). This means that if someone duplicates a block, they’ll duplicate this data section as well. So what you can do is declare a struct equivalent to the captured variables, and implement your code with (pseudocode):

void    MyBlock( void )
{
struct CapturedIVars * capturedIVars = NULL;

currentInstruction:
    capturedIVars = pointer_to_current_instruction + (ivarsSection - currentInstruction);

    // Block's code goes here.
    
    goto ivarsSectionEnd; // Jump over ivars so we don't try to execute our data.
    
ivarsSection:
    assembler_magic_to_reserve_some_space;
ivarsSectionEnd:
}

Now you can use the capturedIVars pointer to access the data attached to your function, but to any caller, MyBlock is just a plain old function that takes no arguments. But if you look at it from a distance, this is simply an object prefixed with a stub that looks like a function, so our general theory of blocks being just special syntax for objects holds.

I presume this is how Swift implements its blocks, because it really doesn’t distinguish between blocks and functions.