Being fair to your competitors

One thing that every successful developer will eventually face is competition. When most people think about competition, they think about evil people who want to take your customers away from you, copy your great ideas, and say bad things about you.

That’s nonsense.

In most cases, a competitor is actually a person that is very much like yourself. So much so, in fact, that they chose to develop a similar product, marketed to similar customers. If you talk to one of your competitors, you will find that you share certain interests.

Mind you, I am not claiming that every competitor is exactly like you are. There is often a certain, important difference, something that keeps you from pooling resources and tackling the whole problem together. If you weren’t competitors, you might even have become friends. But still, competitors are usually made of the material that friends are made of, so I firmly believe that one should treat them as such.

I guess the best way to illustrate the ground rules for dealing with competitors is to take an example from real life, and point out what went wrong.

You might have heard about the Twitpocalypse a while ago, an occasion where a number used to uniquely identify a Twitter message became so large that many Twitter clients had to be revised to avoid problems. The developers of one Twitter client, let’s call it Twitter client A, made a mistake, and users started posting on Twitter about the weird issues they were having due to this mistake. The developers quickly started working on a fix.

During the time that it took to fix the issue, the developer of Twitter client B started replying to the messages of users mentioning the bug in Twitter client A, advertising his own product.

In my opinion, this is an abuse of Twitter, and akin to spamming. People were asking about the problem with their existing clients; they were not asking for sales solicitation from a competitor. It is all right, and even a demonstration of paying attention to your (potential) users, to answer tweets that ask generally for a product that does what your product does, or that ask about alternatives to one of your competitors. But when you reply to posts about your competitor, you are essentially intruding on a conversation between other people.

Moreover, it is showing disrespect to your users. You wouldn’t just walk up to a stranger in the street and tell them that their car is crap and that they should buy a new car from you. And you would consider it slimy if a used-car salesman walked up to you when you had a flat tire, to tell you that they have a new car on offer.

Since a developer, even if they’re trying to be fair, is generally biased, they should just have the class to let their competitors (remember, they’re like you) deal with their problems in peace and quiet. Let them have conversations with their customers and don’t intrude. Stay in the public arena, and don’t drag an imaginary “fight” to their doorstep.

It is perfectly fine to highlight the features in your product that your competitor doesn’t have, or even does badly. But there’s no need to bash them. This is your advertising space, why even mention your competitor? Nobody likes the guy who, when his mistakes are pointed out, points out the mistakes of others defensively. That only reeks of desperation. When users come to your web site, they want to know what your program will do to help them.

By virtue of pointing out your features, users will think of those aspects when looking at competing products. Hey, it’s happened to me:

One competitor to an app of mine pointed out a really trivial feature: window zooming. I was confused. Why would one even talk about this? Then I got a sinking feeling. Hadn’t I recently changed something in a related area? I looked into it, and sure enough, I had broken window zooming without even realizing it.

That was clever. Take advantage of what your app does better, but be fair.

Defensive Coding in Objective-C

When programming in a C-descended language like Objective-C, there are many things that can easily go wrong. To avoid the worst of these errors, programmers have come up with various coding conventions that make it harder to cause such bugs. We’re not talking about indentation or spacing, but rather about “mini-patterns” that ensure certain errors are caught more easily. Here’s my spontaneous, certainly not exhaustive list:

Autorelease Early, Autorelease Often

When you allocate a new object that doesn’t immediately go into an instance variable, it is easy to forget to release that object and leak it. Even if you remember to call -release on it at the end of your method, someone might later add a return statement somewhere, and overlook that there’s an object in need of releasing.

One way to fix this is to use goto and a bail: label to cause all exits from your method to go through one funnel point that releases everything again. Kind of a “dealloc method for your method”. goto is not inherently bad (that’s just a rumour brought about by a misinterpretation of the title of a paper by Mr. Dijkstra). That said, the code quickly becomes hairy if you have many different error exits from your method.
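
As a sketch of that funnel in plain C (names made up for illustration), every failure path jumps to one bail: label that releases everything again:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns 0 on success and hands the caller an allocated string;
   returns -1 on failure. Every error exit goes through "bail". */
int copy_greeting(char **out)
{
	char *buffer = NULL;
	int   result = -1;            /* assume failure until the end */

	buffer = malloc(32);
	if (!buffer)
		goto bail;

	if (snprintf(buffer, 32, "hello") < 0)
		goto bail;                /* an error exit still runs the cleanup */

	*out = buffer;
	buffer = NULL;                /* ownership transferred, don't free it */
	result = 0;

bail:
	free(buffer);                 /* free(NULL) is a harmless no-op */
	return result;
}
```

Because the funnel frees whatever is still owned, adding a new error check later is just another goto bail, with no chance of forgetting a release.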

An easier way to fix this is to just remember to -autorelease the object right after you create it. That way, at the moment of creation, where it is glaringly obvious there’s an object in need of later cleanup (based on alloc/init or copy in the name), you already ensure you’re not leaking. If someone needs it later, they can always retain it explicitly. Leak-free code for free, and even for people with the attention span of a goldfish (Or poisson rouge, as my favorite leak-hunting colleague would say).
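
Autorelease itself is an Objective-C runtime feature, but the shape of the idea can be sketched as a toy pool in plain C (all names made up): register the allocation for later cleanup the moment you create it, and drain the pool at one known point later.

```c
#include <stdlib.h>

/* A toy pool, NOT the real Objective-C mechanism: up to POOL_MAX
   pointers are remembered and freed in one sweep. */
#define POOL_MAX 32

static void *gPool[POOL_MAX];
static int   gPoolCount = 0;

void *autorelease(void *p)       /* remember it, hand it right back */
{
	if (p && gPoolCount < POOL_MAX)
		gPool[gPoolCount++] = p;
	return p;
}

void drain_pool(void)            /* the "end of the event loop" cleanup */
{
	while (gPoolCount > 0)
		free(gPool[--gPoolCount]);
}

int pool_size(void)
{
	return gPoolCount;
}
```

The point is the ordering: the cleanup is arranged at the allocation site, so no early return between creation and cleanup can leak.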

NIL Everything That Isn’t Bolted Down

The problem with C is that local variables are not initialized to zero. Nor are pointers to released objects cleared to nil. No, local variables contain arbitrary numbers that happened to be on the stack when they were created, and variables valiantly keep pointing at the spot that used to house your NSString long after you’ve released it. “There! Look! There’s a big bunch of nothing here that seems to be an NSString!”

A good way to avoid spending hours trying to track down dangling pointers is to set them to nil whenever they contain nothing. Every time you declare a pointer like

	NSString* myString;

stop yourself and instead initialize it properly

	NSString* myString = nil;

You’ll be grateful you did that the moment someone adds an if statement around a few lines that used to assign a value to this variable.

The same applies to objects that you dispose of. The moment you dispose of an object, set the variable that used to point to it to nil (be it an instance variable, a global, or just a local one). Again, in a complex function, someone might insert an if statement that releases your object, and miss that under certain conditions, the code you wrote still tries to access that object. When you later debug that code, nil will make it obvious the object is gone. On the other hand, any old pointer, probably still pointing at valid-looking remnants of the object that used to be there, will not obviously be invalid to you.

I’ve defined myself a DESTROY() macro like GNUstep has it to help with this. DESTROY() first releases an object, then sets its variable to nil. But I only write DESTROY(myVar);.
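
For illustration, here is a plain-C analogue of that macro (freeing rather than releasing); the point is that disposing and clearing happen as one indivisible step:

```c
#include <stdlib.h>

/* Plain-C analogue of GNUstep's DESTROY(): dispose of the storage
   AND clear the variable, so nothing keeps pointing at dead memory. */
#define DESTROY(var)  do { free(var); (var) = NULL; } while (0)

/* Returns 1 if the variable is provably cleared after DESTROY(). */
int destroy_demo(void)
{
	char *scratch = malloc(64);
	DESTROY(scratch);
	return scratch == NULL;
}
```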

Don’t Use Accessors in Constructors or Destructors

This may sound a bit odd, but there is a reason behind this madness. Constructors (i.e. -init methods in ObjC) and destructors (i.e. -dealloc or -finalize) are special methods: they are called before your object has been fully initialized, or may be called after it has already been partially torn down.

If someone subclasses your class, your object is still an object of that subclass. So, by the time your -dealloc method is called, the subclass has already been asked to do its -dealloc, and most of the instance variables are gone. If you now call an accessor, and that accessor does anything more than change the instance variable (e.g. send out notifications to interested parties), it might pass a pointer to its half-destructed zombie self to those other objects, or make decisions based on half-valid object state. The same applies to the constructor, but of course in reverse.

Now, some people say that accessors should not be doing anything but change instance variables, but that is utter nonsense. If that was all they’re supposed to do, we wouldn’t need them. Accessors are supposed to maintain encapsulation. They’re supposed to insulate you from the internal details of how a particular object does its work, so you can easily revise the object to work in a completely different way, without anyone on the outside noticing. If an accessor could only change an instance variable, you would have very limited means to change this internal implementation.

Moreover, I don’t think Apple would have introduced Key-Value-Coding and Key-Value-Observing if they didn’t agree at least partially that it’s fine to do a bunch of things in response to an accessor being used to mutate an object.

Mind you, all of this only applies to accessors being called on self from your constructor or destructor. If you’re setting up another object, you essentially have no choice but to use its accessors, and it would very often violate encapsulation if you did otherwise.

In Fact, Don’t Do Anything Big in Constructors and Destructors

The above rule can actually be made more generic: Whenever you do anything in a constructor or a destructor, try to think whether you really need to do it here and now. They’re mainly there to manage your instance variables. If you have to register for notifications or otherwise access external objects, it’s always safer to do it elsewhere, when you can be sure that your object has been completely constructed.

A neat trick in constructors is to call -performSelector:withObject:afterDelay: on yourself with a delay of 0. This ensures a method that initializes stuff gets called on your object the next time through the event loop. Of course, for many objects that opens yet another can of worms (imagine you’d just created an NSScanner and had to wait for the event loop to run once before you could use it!).

Another thing that sometimes works is to access external objects lazily. E.g. the first time someone calls any of the -scanXXX methods on an NSScanner, it could transparently and implicitly do some more involved setup and set a flag that this setup has happened.
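
A minimal sketch of such lazy setup in plain C (names invented for illustration): the first call through any accessor performs the expensive work once and records that it happened.

```c
/* A hypothetical "scanner" that defers its expensive setup until
   the first scan-style call. */
static int gScannerReady = 0;
static int gSetupCount   = 0;    /* counts how often setup really ran */

static void ensure_setup(void)
{
	if (!gScannerReady) {
		gSetupCount++;           /* stand-in for the expensive work */
		gScannerReady = 1;       /* remember that it has happened */
	}
}

int scan_next(void)
{
	ensure_setup();              /* transparent to the caller */
	return 42;                   /* stand-in result */
}

int setup_count(void)
{
	return gSetupCount;
}
```

However many times scan_next() is called, the setup runs exactly once, and never from inside the constructor.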

I have a similar recommendation for destructors: you should try to close files or relinquish external resources explicitly, before your object is released, if you can. There’s nothing wrong with having code in your destructor that makes sure of this as well (i.e. to avoid leaking open file descriptors), but it is desirable to have that as a fallback, not as the preferred API.

Now, before you go all “goto considered harmful” on me: I’m not saying that doing worthwhile things in constructors and destructors is bad. Rather, all I’m saying is that other good places to do those things are often overlooked, both because the whole matter of half-constructed/destructed objects can get hairy, and because anyone else can retain your object and thus prevent your resource from going away.

If the object is by itself the resource, that is exactly what you want. It is what retain/release was designed for, after all. But if the resource simply represents a file or a hardware device, and someone deletes the file or unplugs the device, you must be able to cope gracefully with your object still existing because some nit retains it, even though the actual resource is gone.

And if you want to call methods that a subclass would want to override (and in Objective-C, there is no such thing as a “method that can not be overridden”, by design!), you’d prefer to have a fully-initialized object ready.

Follow Apple’s Singleton Design Pattern

There is a nice little example implementation of the Singleton design pattern on Apple’s developer web site. Implement it.

While I think the -retain/-release methods should actually be left alone, so you get some decent crashes and notice when someone retains or releases a singleton the wrong number of times (retaining or releasing should be allowed on any object, even if just to make it easy to keep certain code agnostic of the precise type it’s dealing with, so we can’t just make them throw an exception), they got a lot of details right:

They don’t wait for -init to return to set the global singleton variable. After all, singletons can be subclassed, too (such a subclass usually gets instantiated instead of the superclass, since, just like in Highlander, there can be only one). If any of the -init methods do anything that might trigger code that might in turn call your +sharedManager method (like, I don’t know, register for IOKit notifications and send NSNotifications when they come in), this would invite endless recursion. Since the singleton global hasn’t been set yet, that second call would create a second singleton instance, which would in turn trigger the notifications, which would in turn create a third singleton … and so on.

What Apple’s code does is to cleverly override +alloc to set the global variable. That way, it is already set before anyone ever gets around to doing anything with the object. They also have a lock on the class. So, even in a multi-threaded implementation, they only allocate the object once. Since they return nil on subsequent attempts to alloc the object, they also only ever allocate and init one object.

It’s a very solid implementation, and whenever I’ve taken a shortcut on this in the past (and the code on my site will show you I had been doing this until fairly recently), it’s caused me pain. I’m glad I finally understand it now.
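
This is not Apple’s code, but the crucial ordering detail can be sketched in plain C: set the shared-instance pointer before doing any further setup, so a re-entrant call gets the existing instance instead of creating a second one.

```c
#include <stddef.h>

typedef struct { int value; } Manager;   /* made-up singleton type */

Manager *shared_manager(void)
{
	static Manager  instance;
	static Manager *shared = NULL;

	if (!shared) {
		shared = &instance;   /* set BEFORE any init side effects... */
		instance.value = 16;  /* ...so if this setup triggered code
		                         that called shared_manager() again,
		                         it would get the same pointer back */
	}
	return shared;
}
```

(The thread-safety half of Apple’s version, the lock on the class, is omitted here; this shows only the re-entrancy half.)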

Clear Your Weak References

One convention in Cocoa is that you don’t retain your delegate. This kind of “weak” reference from the delegator to the delegate may seem dangerous at first, but makes complete sense in the common use case:

Usually, a controller object creates another object and retains it, and sets itself up as that object’s delegate to be able to modify or benefit from its behaviour. For example, an NSWindowController creates an NSWindow, becomes its owner by retaining it, and makes itself the delegate of that window.

Now, if the NSWindow retained its delegate, if it retained the NSWindowController, we would have a retain cycle: When the NSWindowController is released by the last external party, it would still have a retain count of 1, because the NSWindow would have its delegate retained. However, the NSWindow would also have a retain count of 1, because the NSWindowController created the NSWindow and kept it retained. So both are waiting for the other to release them. Only then would their -dealloc methods get called, which would release the other one. They’d be like two lovers lost in space, separate from everyone else, but closely holding on to each other.

So, the rule was laid down: You don’t retain your delegate, as the delegate probably already owns you. But what happens if someone else retains the object you own? Your NSWindowController is released, it relinquishes its hold on the NSWindow by releasing it. But that other guy still has the NSWindow retained, so it stays open. Someone clicks your window and a delegate method is called.

Wait a second! The NSWindowController was the delegate! But it is gone!? Well, unless the NSWindowController was considerate enough to tell the NSWindow, by calling its -setDelegate: method and setting it to nil, NSWindow wouldn’t know. It would find itself yelling at a dead object, probably crashing.

So what have we learned? Unless you’re a fan of zombies, you’ll appreciate setting any weak references to yourself to nil in your -dealloc method.
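
The same discipline can be sketched in plain C (all types and names made up): the window keeps a weak, non-owning pointer to its delegate, and the delegate clears that pointer when it is torn down.

```c
#include <stddef.h>

/* The Window holds a weak (non-owning) reference to its delegate;
   the Controller owns the Window. */
typedef struct Window {
	struct Controller *delegate;   /* weak: never freed through this */
} Window;

typedef struct Controller {
	Window *window;                /* strong: the controller owns it */
} Controller;

/* The "-dealloc courtesy": before the controller goes away, it
   clears any weak references still pointing back at it. */
void controller_teardown(Controller *c)
{
	if (c->window && c->window->delegate == c)
		c->window->delegate = NULL;
}

int window_has_delegate(const Window *w)
{
	return w->delegate != NULL;
}
```

After teardown, anyone poking the window finds a clean NULL instead of a dangling pointer to a dead controller.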

In case you’re wondering who might be mad enough to retain your objects, look no farther than the deferred method call mechanisms, particularly NSTimer, NSThread, NSInvocation and the -performSelector:... family of methods that eventually end up with your NSRunLoop.

Of course, you can go and invalidate the timer, cancel the -performSelector:s, and in many cases you should. But in other cases, you may actually want all of these operations to be performed on the object before it goes away (though maybe not in our example of an NSWindow). And of course this isn’t really a good example, because a good design would probably not install timers and the like on objects other than themselves (that usually violates encapsulation, after all). Then again, with NSInvocation you have no choice.

Use symbolic constants

Cocoa and Foundation make use of string constants for identification purposes a lot: notification names, keys in NSDictionary objects. You also use them elsewhere, to refer to files on disk, to processes via their bundle identifiers, etc.

Now, everyone knows that defining a string constant as a symbolic constant using #define or by defining it as a variable at global scope makes it easier to change this string later. Particularly if the string is used in several places. But often, people “know” that this constant will never need to be changed, so they just hard-code it. Bad idea. There are other advantages to a symbolic constant:

The compiler knows about symbolic constants.

That is right. That means that, should you mistype the symbolic constant, the compiler will only see an unknown identifier. If you mistype a regular string constant, all the compiler sees is a string. A compiler has no idea that “MyPrettyColor” and “MyPrettyColour” are supposed to be the same thing, but one of them is obviously a mis-spelling. If you had defined a symbolic constant like

#define MyPrettyColour    @"MyPrettyColour"

it would compile to the exact same code as using the pure string constant, but if you mistype MyPrettyColour as MyPrettyColor, the compiler will immediately tell you about it, and you won’t wonder why a dictionary value always gets returned as nil even though you know for certain you put it in the dictionary.

This applies similarly to any other kind of constant, be it an int, a double or whatever. It’s easy to hit 111111 when you meant to write 11111, and that excess digit is not always easily noticed, as our mind tends to “correct” what we see as it tries to make sense of it. If you define a symbolic constant, the compiler will catch any extra character you type in its name by accident. Even better, you can define the constant correctly once. Tend to forget the U at the end of unsigned numbers? The constant will always contain it; you only have to think of it once. If you forgot it, you can simply add it, and all other spots that use the constant are magically fixed.
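
A small plain-C illustration of both points (names made up): the compiler knows identifiers, not spellings inside string literals, so mistyping MyPrettyColour anywhere it is used produces an “undeclared identifier” error instead of a silently wrong key; and the easy-to-forget U suffix is written down in exactly one place.

```c
#include <string.h>

/* Mistype the macro name and the compiler complains; mistype the
   raw string and it compiles silently. */
#define MyPrettyColour      "MyPrettyColour"

/* The awkward-to-remember suffix lives here, once: */
#define MyMaxConnections    11111U

int key_matches(const char *key)
{
	return strcmp(key, MyPrettyColour) == 0;
}

unsigned max_connections(void)
{
	return MyMaxConnections;
}
```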

And last but not least, symbolic constants can improve readability. Imagine drawing code where you deal with margins, line widths etc. Now in one spot you draw a button, and in another you hit-test it. To do hit-testing, you inset your rectangle. What do you think is more maintainable?

buttonHotRect = NSInsetRect( [self bounds], 67, 42 );


buttonHotRect = NSInsetRect( [self bounds],
                             MyLeftMarginWidth + MyLineWidth + MyLineWidth
                             + MyRightShadowWidth + MyRightMarginWidth,
                             MyTopMarginHeight + MyLineHeight + MyLineHeight
                             + MyBottomShadowHeight + MyBottomMarginHeight );

If you ever change the drawing, how likely is it that you’ll recall what the numbers 67 and 42 were made up of? Any compiler worth its salt will fold the numeric constants and thus generate the same code for both of them. There is no reason not to go for readable code.

Closing words

Reading this, you may think I should just stop writing thoughtless or bad code instead of doing things like these to mask the issues. But the fact of the matter is: everyone has a bad day, and everyone makes mistakes. Particularly when you program in teams, and you’re programming all week long, and there are deadlines by which you have to ship, the likelihood of mistakes increases.

And even if you’re not working in a team, remember the Zarra description of programming alone: you’re programming in a team of three people. Past You, who was a moron; Present You, who is average; and Future You, who is a genius. You’re already annoyed by Past You’s lack of skill, so you don’t want to make things any harder on Future You.

Following the rules in this article will make many bugs more obvious while you’re debugging them, and will prevent many crashes from happening in the first place.

So, do you guys have any neat coding tricks to share that I forgot to mention?

Generating Machine Code at Runtime

Okay, so my next attempt at learning how my computer works and how to speak machine language is the following C code fragment:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef int (*FuncPtr)();

int main( void )
{
    // Create a function:
    char            testFunc[] = { 0x90,                         // NOP (not really necessary...)
                                   0xB8, 0x10, 0x00, 0x00, 0x00, // MOVL $16,%eax
                                   0xC3 };                       // RET

    // Make a copy on the heap, OS doesn't like executing the stack:
    FuncPtr         testFuncPtr = (FuncPtr) malloc(7);
    memmove( (void*) testFuncPtr, testFunc, 7 );

    printf("Before function.\n");
    int result = (*testFuncPtr)();
    printf("Result %d\n", result);

    return 0;
}

Basically, this stores the raw opcodes of a function in an array of chars. The first byte of each line is usually the opcode, i.e. 0x90 is No-Op, 0xB8 is a MOVL into the eax register (with the next 4 bytes being the number to store, in this case 16), and 0xC3 is the return instruction (I had to look up the opcodes in Intel’s documentation).

One thing to watch out for here (at least on Mac OS X), is that you’ll get a bad access error if you try to execute testFunc directly. That’s because testFunc is on the stack, and the stack shouldn’t contain executable code (it’s a small safety measure). So, what we do is we simply malloc some memory on the heap, and stuff our code in there.

You may wonder why I’m using eax of all registers to store my number 16 in. Easy: Because the convention is that an int return value (and most other 4-byte return values) goes in eax when a function returns. So, what this does is it essentially returns 16. Which our printf() proves. Neat!

Intel’s documentation describes the opcodes in a very complicated way, so what I essentially do is I write some assembler code and enclose the instruction whose byte sequence I want to find out in instructions whose byte sequence I already know (I like to use six nops, which are short and show up as 0x90 90 90 90 90 90). Then I compile that, and then use a hex editor to search for the known instructions, and whatever is between them must be my new one. Here’s a small table of other operations you may find in the typical program and what byte sequences they turn to:

0x50 pushl %eax
0x53 pushl %ebx
0x55 pushl %ebp
0x89 E5 movl %esp, %ebp
0x90 nop
0xB8 NN NN NN NN movl $N, %eax
0x68 NN NN NN NN pushl $N
0xE8 NN NN NN NN call relativeOffsetNFromEndOfInstruction
0x8B 1C 24 movl (%esp), %ebx
0x8D 83 NN NN NN NN leal relativeOffsetToData(%ebx), %eax
0x8D 85 NN NN NN NN leal relativeOffsetToData(%ebp), %eax
0x5B popl %ebx
0x83 C4 NN addl $NN,%esp
0x83 EC NN subl $NN,%esp
0x8B 00 movl (%eax), %eax
0x89 45 NN movl %eax, NN(%ebp)
0xC9 leave
0xC3 ret

The code fragment above is essentially what one would need to create a just-in-time compiler. For a real compiler, instead of executing this directly, we’d have to write it to a complete MachO file and link it with crt1.o.

Update: On top of the instructions for position-independent code (PIC), I’ve also added some more that are useful for passing structs as parameters on the stack.

Intel assembler on Mac OS X

I’ve always wanted to learn another assembler, and with one of my colleagues being a real assembler guru, and the Intel reference books on my bookshelf, and the Intel switch just behind us, I thought this would be a good opportunity to finally get going with x86 assembler.

Now, assembler programming under Mac OS X isn’t quite as well documented as one would wish. There’s no tutorial that I could find (lots of tutorials for Linux and Windows, but none for Mac OS X yet). This won’t be one either; rather, this is a blog posting where I share what I found out about assembler on OS X, and it is probably only useful to someone who already knows some assembler, but just doesn’t know Intel on Mac OS X. My main approach is to compile C source code into assembler source files using GCC. Then I can look at that code and find out what assembler instructions correspond to what C command. If all of this turns out to be correct and I should happen to have loads of time on my hands, I may still go out there and turn this into a decent tutorial.

The basics are pretty simple:

	.text						# start of code indicator.
.globl _main					# make the main function visible to the outside.
_main:							# actually label this spot as the start of our main function.
	pushl	%ebp				# save the base pointer to the stack.
	movl	%esp, %ebp			# put the previous stack pointer into the base pointer.
	subl	$8, %esp			# Balance the stack onto a 16-byte boundary.
	movl	$0, %eax			# Stuff 0 into EAX, which is where result values go.
	leave						# leave cleans up base and stack pointers again.
	ret							# returns to whoever called us.

Now, the underscore in front of “main” is a convention in C, so just accept it. When you enter the _main function, the return address (i.e. the instruction where the program will continue after the function has finished, aka “back pointer”) has already been pushed on the stack, taking up 4 bytes. We also save the base pointer (the point where our caller can find its parameters on the stack) to the stack, and set it to the current stack pointer (which is where our parameters are). That takes another 4 bytes, so we have 8 bytes now. Since the stack should be aligned on 16 bytes before you can make a call to another function, we subtract another 8 from the stack pointer, which pads out the stack (we could also just do two “pushl $0” for the same effect). If we used any local variables, we would use this opportunity to subtract more for them.

Now comes the actual body of our function. What we do is simply return 0. This is done by stuffing 0 in the eax register.

Finally, we have the tail end of our function, which calls leave (which cleans up by restoring our caller’s base pointer and stack pointer) and then call ret, which pops the return address off the stack and continues execution there.

Calling a local function

Calling a function is fairly simple, as long as it’s a local one right in the same file as ours. In that case, what you do is you first declare that function:

.globl _doSomething
_doSomething:					# Our doSomething function.
	pushl	%ebp
	movl	%esp, %ebp
	subl	$8, %esp
	nop							# does nothing.
	leave
	ret
.globl _main
_main:
	pushl	%ebp
	movl	%esp, %ebp
	subl	$24, %esp			# 8 to align, 16 for our 4-byte parameter and padding.
	movl	$3, (%esp)			# write our parameter at the end of the stack (i.e. padding goes first).
	call	_doSomething		# call doSomething.
	movl	$0, %eax
	leave
	ret

“nop” is a do-nothing instruction I just inserted here to show where doSomething’s code would go. That’s pretty easy. You just write the function, push the parameters on the stack and use call to jump to the function, and that will take care of pushing the return address and all that. The only tricky thing is passing the parameters. You have to pad first, and then push (or mov, in our case) the parameters in reverse order (i.e. #1 is at the bottom of the stack, #2 above it etc.). That’s because otherwise the function being called would have to skip the padding. Well, could be worse.

Accessing parameters

To access any parameters, you address them relative to the base pointer. At the base pointer itself sits your caller’s saved base pointer, followed by the return address, so you need to add 4 + 4 = 8 bytes to reach the first parameter. Yes, since the stack starts at the end of memory and grows towards the beginning, and you subtract from the stack pointer to make the stack larger, you need to add to the stack pointer to find something on the stack. The same applies to our base pointer, of course:

	movl	12(%ebp), %eax	# get parameter 2 at offset 4 + 4 + 4
	addl	8(%ebp), %eax	# get parameter 1 at offset 4 + 4

This would store your second parameter in eax and then add the first parameter to it, leaving the result in eax, where it’s ready for use as a return value. Note the ##(foo) syntax, which adds the number ## to the address in register foo. This is register-relative addressing.

An added benefit of this is that you can actually pass more parameters to a function than it knows to handle, and it will just ignore the rest.

Fetching data

To access data (e.g. strings), it gets trickier. You declare data like the following:

	.cstring
myHelloWorld:
	.ascii "Hello World!\0"
	.text
.globl _main
. . .

So, you add a .cstring section at the top of the function, and in that you declare a label and use the .ascii keyword to actually stash your string there. So far, so good, there’s only one problem:

All data manipulation is done using absolute addresses. But we don’t know at what position in memory our program will be loaded. Labels aren’t absolute addresses, they get compiled into relative offsets from the start of our code. So, how do we find out at which absolute address our string myHelloWorld is? Well, the trick MachO uses is that it knows that our program will be loaded as one huge chunk. So, we know that the distance between any of our instructions in the code will always stay at the same distance to our string.

So, if we could only get the address of one instruction in our code that has a label, we could calculate the absolute address of our string from that. Now, look above, at our function call code. Notice anything? Our return address is an absolute pointer to the next instruction after a function call. So, all we need to do to get our address is call a function. When you assemble C source code, they call this helper function ___i686.get_pc_thunk.bx, which is quite a mouthful. Let’s just call it _nextInstructionAddress:

. . .
	call	_nextInstructionAddress
myAnchorPoint:
. . .

That’s what we call somewhere at the start of our code to find our own address. Note how I cleverly already added a label myAnchorPoint, which labels the instruction whose address we’ll get. Then we somewhere (e.g. at the bottom) define that function:

. . .
_nextInstructionAddress:
	movl	(%esp), %ebx
	ret

We don’t even bother aligning the stack or changing and restoring the base pointer. This simply peeks at the last item on the stack (the return address) and stashes that in register ebx. Then it returns (and obviously doesn’t call leave because we pushed no base pointer that it could restore).

Once we have this address in ebx, we can do the following to get our string’s address into a register, and from there onto the stack:

. . .
	leal	myHelloWorld-myAnchorPoint(%ebx), %eax
	movl	%eax, (%esp)
. . .

LEA means “Load Effective Address”, i.e. take an address and stash it into a register. myHelloWorld-myAnchorPoint calculates the difference between our two labels, and thus tells us how far myHelloWorld is from myAnchorPoint. Since myHelloWorld is probably at the start of the program, e.g. at address 3 maybe, and myAnchorPoint further down, say at address 20, what we get is a negative value, e.g. -17. And xxx(%ebx) is how you tell the assembler that you want to add an offset to a register to get a memory address. ebx contains the address of myAnchorPoint, so what this does is subtract 17 from myAnchorPoint’s absolute address, giving us the absolute address of myHelloWorld! Whooo! And this mess is called “position-independent code”.

Now, the l suffix on leal stands for “long” (which is 32 bits, i.e. the size of a pointer on a 32-bit CPU), and the instruction stashes that long into register eax. The movl instruction then moves it from our register into the last item on the stack, ready for use as a parameter to a function.

Calling a system function

Now, it’d be really nice if we could printf() or something, right? Well, trouble is, we don’t know the address of printf(). But this time it’s actually easy. We add a new section at the bottom of our code:

. . .
	.section __IMPORT,__jump_table,symbol_stubs,self_modifying_code+pure_instructions,5
_printf_stub:
	.indirect_symbol _printf
	hlt ; hlt ; hlt ; hlt ; hlt
_getchar_stub:
	.indirect_symbol _getchar
	hlt ; hlt ; hlt ; hlt ; hlt

This is a new section named __IMPORT,__jump_table. It has the type symbol_stubs and the attributes self_modifying_code and pure_instructions. The 5 is the size of each stub in bytes, which intentionally matches the five one-byte hlt instructions below.

This section is special, because when our code is loaded, the loader will look at it. It will see that there is an .indirect_symbol directive for a function named “printf”, and will look up that function. Then it will replace the five hlt instructions, each of which is one byte in size, with an instruction to jump to that address (hence the self_modifying_code). We also added a label for each indirect symbol, which we name the same as the symbol, just with “_stub” appended.

So, to call printf, all you have to do now is push the string on the stack and then

	call	_printf_stub

This call jumps to _printf_stub and immediately continues on to printf itself. And just to show you that you can have several such imported symbols, I’ve also included a stub for getchar. Note that the system usually doesn’t name these symbols “_foo_stub”, but rather “L_foo$stub” (yes, a label name can contain dollar signs; you can even put the label in quotes and have spaces in it…). Same difference.

Okay, so that’s how much I’ve guessed my way through it so far. Comments? Corrections?

PS – Thanks to John Kohr, Alexandre Colucci, Jonas Maebe, Eric Albert and Jordan Krushen, all of whom helped me figure this out one way or the other. Thanks, guys!

Update: Added mention of how to actually access parameters.

Debugging Assembler on Mac OS X

The thing a programmer probably does most is, unsurprisingly, debugging. Not that programmers necessarily love debugging, but if you don’t have a high pain tolerance for debugging, you probably don’t want to pursue a career in programming. On the other hand, if you like the challenge of the bug hunt, you should try getting into this biz. Or into exterminating. Whatever makes you happy, man.

Anyway, my recent posting on Intel Assembly Language Programming on Mac OS X kinda left you hanging in the air on this one. I didn’t say anything about debugging. Why? Because, honestly, I hadn’t got that far yet. Of course, the first bug didn’t leave me waiting for long, so here are some handy tools if you want to debug your assembler program.

First, you need to compile your assembler source files with GCC using the -g option. That will give you debug symbols, which means the debugger will show you each line. Once you’ve done that, you just launch GDB, on the command line, as usual:

% gdb path/to/your/executable

You’ll get the GDB prompt you may have already seen in Xcode’s debugger console. Type in

(gdb) start

and it will run your program up to the start of your main function. GDB will always print the next line, and you can use the step command to execute it and see the next one. Of course, you may want to see what is in a particular register or at a particular memory address. Easy:

(gdb) print/x $eax

will print register eax as hexadecimal (that’s what the ‘/x’ means – there’s also ‘/d’ for decimal, ‘/c’ for character, ‘/s’ for string and ‘/t’ for binary). If you want to view a memory location, you use the ‘x’ command instead.

(gdb) x/1xb $eax

will take the address in $eax, and print 1 byte at that address in hexadecimal. The parts after the slash decode into /<count><displayFormat><type>. displayFormat is the same as the thing after the slash when you print, count is a number indicating how many to print, and type is ‘b’ for byte, ‘h’ for halfword (2 bytes) or ‘w’ for word (4 bytes).

Oh yeah and to get out of gdb again, the command is quit. Happy debugging!

Update: I recently realized I’d omitted two important little tricks from this description: If you don’t have debug symbols, you can still step through code. The relevant commands are

(gdb) si

which steps through code by one instruction (this even works with system functions etc.) and

(gdb) x/i $pc

which disassembles and prints the current instruction.

Nice Intel assembler text…

[two of Intel's instruction set manuals]

I’ve recently been looking into assembler coding a little. I learned assembler theory back in High School in Mr. Trapp’s computer programming elective, and later learned a bit of 68000 assembler as well, but never got round to actually getting into it when the PPC arrived on the scene. So, when I recently heard at work how one can get a whole bunch of Intel reference books for free, I thought this might be a good opportunity to learn x86 assembler. After all, I’m a parser and compiler geek, it’s kind of a gap in my skill set if I can’t do the backend.

Now, trouble is, while there are many tutorials for Linux and Windows, I couldn’t find a single one for Mac OS X. So, I started googling, assembling C code and bothering some developers I know and others on mailing lists with my questions, and I thought I’d share my first findings:

  • I got a link to Apple’s Mac OS X ABI docs. This is really good, as it documents an important part of OS X in detail: how to align the stack (on 16 bytes, no matter what Intel’s docs tell you), and how to call your own functions.
  • Aforementioned 16-byte stack alignment is not always necessary, but when you call a function, you must give it a properly aligned stack. When you are called, however, the stack will have the return address on it, which is 4 bytes. So, after you push the base pointer on the stack (4 more bytes), you have to move the stack pointer by another 8 bytes at least to make it aligned on a 16-byte boundary again.
  • A nice way to learn assembler is by writing very simple C programs and using gcc -S my_simple_c_program.c to get them translated into assembler code. By “simple” I mean you should start out with stuff that doesn’t use any system functions, because those are dynamically linked and make for rather complex assembler.
  • To compile such a program, simply pass it to GCC again, as you would with a C source file. E.g. gcc my_simple_c_program.s -o my_simple_c_program

This might be a good point to mention my Memory Management chapter in the Masters of the Void C tutorial again, which illustrates how memory works. As I learn more, I may post supplements that slowly teach you assembler. Well, I’m not promising anything, but I’d love to do that.

Category or Delegate?

One of Objective C’s nicer features is the “Category”. A Category is simply a way to add your own methods to existing classes. And the best part of it: You don’t even need to have the source code to that class.

If you have some code for adding all those backslashes to a string so you can pass it to a command-line tool, then you can just put it into an NSString-category and immediately every NSString in your program understands this new method.

There’s just one problem with this: It’s very easy to misuse. Ever since object-oriented programming was invented, people seem to be oddly reluctant to create a new class. Also, many people have problems deciding when an object should, say, be a dictionary and when it should merely have a dictionary.

Whenever you extend a class (be it one of your own classes or an extension to another class using a category), or subclass, it helps to keep in mind that you want to keep your code easy to maintain. You want to reuse code so that you have to fix each bug only once in a central place and it’s fixed for good, and you want to take advantage of the fact that the less code you write, the fewer bugs can be in it.

So, to keep your code comprehensible at first read, your first question when extending a class should be: Does what I do still fit the name of the class? E.g. if you’re subclassing NSDictionary, what you end up with should in some way still be a dictionary. If you subclass NSDictionary and you end up with a data source, you should ask yourself whether what you’re doing is really such a good idea.

The second question you should be asking yourself is whether you’re crossing any boundaries. E.g. in above example of a dictionary (part of the “model” layer in the Model-View-Controller pattern) that gets subclassed and becomes a data source (which, strictly speaking, is part of the controller layer, though it’s mainly a “model-controller”), you’re watering down the MVC boundaries that are so useful in synchronizing multiple views with your data or keeping your application portable. It would probably be much better to create a new “DictionaryController” class that can connect to a dictionary. Of course you can give it the optional feature of creating the dictionary if needed (it would have a dictionary, not be one).

The third question should be: Is this really reusable? If you subclass NSMutableDictionary as a data source, what happens if you suddenly have an NSDictionary? The code to view an NSMutableDictionary or an NSDictionary is exactly the same, but your subclass would have to be reimplemented for both of them. Not a good idea. Better create a controller that can then be given either type of dictionary. Sure, you could use a category on NSDictionary, and NSMutableDictionary would probably inherit that, but there’s still point four below…

An extension of this reuse consideration is the thought of rewriting: If you write a category and immediately realize that you’d have to rewrite it (maybe even as a separate object) for one very common use you’ll probably end up needing, then it probably isn’t reusable enough. Categories and objects aren’t very different in effort and lines of code. A category is usually self-contained and reusable, so it should probably have its own file, just like a class. So, if something might work better as a class, just make it one; it doesn’t hurt. In addition, a separate object can easily be instantiated in a NIB and connected to another object. With a category and an existing object, that’s not so easy.

Categories are also where question four comes in: If you’re creating a category, do you really want to carry those methods around with every object? This is not a performance consideration, but rather one of complexity and unwanted side effects. After all, it’s perfectly all right to write a classic C function that converts an object to another type if you only need that in two places, and it’s just as reusable to have a separate controller object that does repeated conversion and translation between data types (like NSFormatter). The ground rule is, if you can’t honestly say to yourself that you will use a particular method in a category on another class a dozen times, you probably shouldn’t be adding it.

As you see, there’s no hard and fast rule when to use a category and when not to (well, at least I haven’t managed to formulate it yet). But hopefully my rambling above will give you a checklist to go through mentally when you’re trying to decide whether a category, subclass, or a new class would be better in a particular situation.

If you have any suggestions, or have insights of your own to share, feel free to leave a comment.

Inform 7 (IF Language) is out!

Mr. Stoneship just posted his link to Inform 7, the interactive fiction development language. I don’t know how such a cool program could so quietly creep up on us. It’s a very interesting approach to a GUI with an English-like programming language and lots of new ideas:

There are several views on your game: a transcript view that can perform something akin to unit tests on your game’s output; the game itself, written pretty much as English text; a tree view and transcript of actions that can be re-run; and the ability to switch between a running game and the game being designed, doing something like “fix and continue”.

The only trouble is, it doesn’t just have the same name, but also the same file extension as the Inform drawing program… I sent out messages to the authors, hopefully they’ll work something out.

Headaches further Revelations

Sometimes you sit there with a headache and the memory of an article and two e-mails you read two days ago, not being able to do much else, and you suddenly comprehend something you didn’t before. And if the headache isn’t from alcohol, you can actually assume that this revelation will still be valid once you feel human again. Try this one:

There were a couple of postings on Cocoa-Dev by someone trying to use Cocoa APIs using AppleScript Studio’s call method function. Now, if you remember, that function lets people used to not-quite-English statements like

copy (display dialog with prompt "I failed jiggery-pokery but I had A's in hullaballoo") to myVar

turn nice readable Objective C statements like

foo = [[NSFileManager defaultManager] directoryContentsAtPath: @"/Users/"];

into toll-free-bridged unreadabilities like

set foo to (call method "directoryContentsAtPath:" of (call method "defaultManager" of class
"NSFileManager") with parameter "Macintosh HD:Users")

(Note that I inserted the brackets in the AppleScript to improve readability).

Now, when I first read this I thought that guy got what he deserved. If you clicked the link above, you’ll know my thoughts about AppleScript, and this example shows beautifully why you want to do Cocoa development in Objective C. So why doesn’t that guy bite the bullet and switch languages?

Then, today, I wrote an e-mail to Tom Pittman and in the last line gushed a little about how his CompileIt! stack helped me get into programming from within the safety of HyperCard. Now, for those of you who never got to play with HyperCard or CompileIt!, basically CompileIt is like AppleScript Studio’s call method command: It lets you call all the system APIs as if they were HyperTalk handlers and functions. So to have Quickdraw draw a rectangle, you’d write something like:

put newPtr(8) into myRect -- 8 bytes, for 4 two-byte shorts
setRect myRect,10,10,100,100
frameRect myRect
disposPtr myRect -- yes, no "e" - those were the System 6 symbol names.

Of course, you could have just done this in plain HyperTalk, by scripting the user’s actions with the drawing tools, like:

choose rect tool
drag from 10,10 to 100,100

but play along there, will you? Well, anyway, just like with ASS and call method, this allows you to turn a straight affair of English-like code into a bastardised form of some systems-programming language.

So, what’s the benefit? The benefit is a shallower learning curve. With tools like HyperCard or ASS, you can create the basics of an application, all the drudgework, using a much more natural set of metaphors. And when those fail you, you don’t have to start from scratch writing event loops and window-management code. You just learn about those few lower-level Toolbox or Cocoa commands you need and only use those.

The advantage is that you can get to know the frameworks step by step, having small successes each time. By the time you have to learn the actual details of Cocoa programming, you’ll already know most of the commands and conventions, and it won’t feel half as foreign as it would have if you’d jumped straight in.

It doesn’t change my opinion that AppleScript is a bass-ackwards language, and it doesn’t change my opinion that call method‘s main effect is making code unreadable, but at least it makes me frightened of all these AppleScripters that will push into the Mac programming market eventually once they’ve moved to a better-designed language. They had the same basic learning curve I had, and they put up with AS… They’ll smoke me in their pipes… :-)

Installation and Uninstallation on MacOS X

Soeren found an article on easier deinstallation on O’Reilly. The general idea (for the pointy-haired bosses among you who don’t have time to read it) is that every application would get an additional .plist file in its bundle containing the paths of files (like its prefs file) that can be safely deleted along with the application to clean up support files, like the ones in ~/Library/Application Support.

Problems of this approach

My one regular reader may not be surprised to hear that I don’t think the paths idea is a robust enough approach. For one, it would break when the user renames a hard disk. Secondly, some support files’ paths (e.g. Photoshop’s swap files) could be configurable, and then the .plist-info would need to be modified. And it’d be even messier if several users could provide different places to keep these files.

Putting support files in the app bundle itself, as some commenter suggested, has similar problems, especially if the software is on a server and shared between hundreds of users. It’d circumvent users’ disk quota and make permissions for an app bundle unnecessarily complex.

Alternative solution

My suggestion would be to simply add some sort of metadata to such support files and then use the Spotlight index to get rid of them. Whether it’s done using access control lists, an xattr that simply says kIsSupportFileOwnedBy = de.zathras.talkingmoose, or an OS9-style creator code defined in your Info.plist that marks all files with this creator as deletable support files, I don’t really care.

If all apps were required to give their support files such attributes, one could even have hard disk clean-up apps that quickly find any orphaned files, as was possible on Classic MacOS with creator codes (e.g. FileBuddy checked for files with unknown creators and offered to delete them). It should also be supported by NSDocument without too much work … NSDocument‘s poor support for creator and type codes is the main reason for bad creator support, along with an ambiguous policy on Apple’s part and the discontinuation of the type/creator registry on their homepage ages ago.

Maybe it should even be a list of owning apps’ bundle IDs in an xattr. That would allow extending this to other kinds of files. E.g. a shared Framework could be auto-deleted when all apps that own it have gone.

Of course, Finder should generally ask whether the admin wants to delete all support files for all users who’ve used this app. After all, when an app is phased out, they may want to remove the global copy and give two users who still need it local copies. In that case, they’d want to be able to keep those users’ files.

Other installation/uninstallation improvements

While I’m at it, here’s an idea for making all applications drop-installable, even when they need an installer/uninstaller: Apple could support install and uninstall executables (which could be applications, shell scripts or .pkg files) that reside in an application’s bundle in a standardised location:

Users simply copy over the app, and when they first launch it, the Finder runs the installer in it, deploying all needed files. When you delete it, Finder finds the uninstaller and offers to run that.