John Gruber just posted a little piece titled Deal With It. He points out something that has annoyed me ever since System 6's Alarm Clock widget (or was it Calendar?):

Controls like the system-standard date and time controls walk the fine line of compromise: They always enforce a valid date and are composed of separate fields that you tab through individually, but when you type the date or time separator, they move on to the next field. So, if you go into Date & Time in System Preferences, click the time there, and enter 12:34, it will automatically put the minutes into the next field, even though your first click only selected the hours.
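The essence of that compromise fits in a few lines. Here's a minimal sketch in Python of the behavior described above; it is just an illustrative model, not how Apple's control is actually implemented:

```python
# Illustrative model of a two-component time field: typing the
# separator commits the current component and advances focus, so
# "12:34" fills both fields without a single Tab press.
class TimeField:
    SEPARATOR = ":"

    def __init__(self):
        self.fields = ["", ""]   # hours, minutes
        self.focused = 0         # index of the field being edited

    def key_typed(self, ch):
        if ch == self.SEPARATOR:
            # The separator moves to the next field instead of beeping.
            self.focused = min(self.focused + 1, len(self.fields) - 1)
        elif ch.isdigit():
            self.fields[self.focused] += ch

field = TimeField()
for ch in "12:34":
    field.key_typed(ch)
print(field.fields)  # ['12', '34']
```

The point is that the separator keystroke the user would type anyway doubles as the "next field" command; validation per component stays intact.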

Stupidly, iCal really implements the components as separate fields and requires you to press the Tab key whenever you want to enter the next date component. Type the date separator and it just beeps at you. IMHO that's the main thing that makes it annoying.

However, even if a custom control for entering formatted values requires just as many (or as few) keystrokes as a free-form text field, the free-form field will feel more natural, as long as it is clear what you are expected to enter into the field, and how to enter it. This can be hinted at with the little grey "placeholder" text some fields on Mac OS X have these days, but it is very hard to get right.

For instance, it's not easy to parse every kind of date a user may type into a date & time field. Some of the ambiguity can be resolved using the system's locale-specific date & time settings (quick: 10/09/2007 -- 10th of September or 9th of October?), but coding everything, including "in 3 days", "3 days from now", "next Wednesday" and all the variations that may exist in a particular language, can be a lot of work and require a good deal of processing. And once you start accepting simple things like "Today" and "Tomorrow", where do you stop? What about "Next Easter" or "The Next Monday that is the 1st of a month"...
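To see how the cases pile up, here's a tiny Python sketch of such a relative-date parser. The phrases, the helper name, and the example date are all made up for this illustration; a real implementation would also need locale data and many, many more patterns:

```python
from datetime import date, timedelta
import re

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def parse_relative(text, today):
    """Parse a handful of English relative-date phrases.
    Every new phrasing ("3 days from now", "next easter", ...)
    means another case here, which is where the work piles up."""
    t = text.strip().lower()
    if t == "today":
        return today
    if t == "tomorrow":
        return today + timedelta(days=1)
    m = re.match(r"in (\d+) days?$", t)
    if m:
        return today + timedelta(days=int(m.group(1)))
    m = re.match(r"next (\w+)$", t)
    if m and m.group(1) in WEEKDAYS:
        target = WEEKDAYS.index(m.group(1))
        days_ahead = (target - today.weekday() - 1) % 7 + 1
        return today + timedelta(days=days_ahead)
    return None  # give up; fall back to the locale's fixed formats

base = date(2007, 10, 9)  # a Tuesday
print(parse_relative("in 3 days", base))       # 2007-10-12
print(parse_relative("next wednesday", base))  # 2007-10-10
print(parse_relative("next easter", base))     # None
```

And note that even this toy version has already made a judgment call: whether "next Wednesday" means tomorrow or a week from tomorrow differs between speakers, which is exactly the kind of ambiguity no amount of code settles for everyone.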

And suddenly we're in the territory of the Uncanny Valley and the Turing Test: How much can I expect a free-form field to understand? Many people will not even try such complex expressions, because no other program understands them. If Apple implemented a consistent free-form date field, users would slowly learn that every application has a field that works the same way, would discover in one application that they can do more complex stuff with it, and would then try that in other apps as well. That's how the Mac OS has always used consistency to the user's benefit.

But still, they'd have to learn the limited language the computer understands. It would just be another command-line. And making a restricted language too close to English has often been the downfall of English-like programming languages: The closer a language is to natural language, the more we tend to memorize only its general meaning. Then, when it comes time to use that command again, we reconstruct the sentence, get one word wrong, and have no idea why the stupid computer doesn't understand us. We dive nose-first into the Uncanny Valley.

However, John's posting makes me think that the future of user input is not the graphical user interface, but actually the command line. Right now, our level of interaction is that of the typical European in rural China: Point at two things, and hope the other person understands. Once dictation matures, this can change. We can already enter commands by voice, as today's speech recognition allows to a degree, but to make this feasible, we'd also have to be able to dictate new data: names, addresses, whatever.

This speech-controlled command-line would not replace the GUI, but augment it. Right now, we're on the verge: NSDateFormatter already understands a lot, but you still have to know (or test) its limits to use it. Luckily, our users are getting more proficient. Somewhere in the middle, our coding skills and our users will meet, and voice control will be an everyday occurrence. Until then, we'll have to cope with The Stupid Computer(tm) and its stubborn misunderstandings...

But yes, the future is the command-line, not the WIMP-interface.

Update: I've had the time to read the complete Three Hypotheses of Human Interface Design article that John Gruber quoted in excerpts. Maybe I'm not getting it, or maybe it's the equations that are actually important, but I think that, in general, most of the things written there can be found in other HCI books:

Measuring the number of actions and weighting them is a very common task in usability research. For example, Jef Raskin's book The Humane Interface provides some examples of this in one of the early chapters, so that readers at least have the basics down before he starts on his actual theses.

Similarly, cognitive load, and how much more important it is than many people think, is covered in Steve Krug's Don't Make Me Think, where he also points out that people spend about 80% of their brain power on what they actually want to achieve, and only 20% on actually using your application.

So, no, I don't think the author actually invented this (well, it may be an independent rediscovery, but like most things, it's been there before), but his article is a good starting point for the subject matter, IMHO.