Usability


Myopic version-control islands


Being a programmer, I use version control software a lot. A while ago, there was a great upsurge in such software. I suppose it started with Versions and Cornerstone, then continued with Git clients like Tower, Github and SourceTree.

Yet none of them really innovated on their command-line brethren. Innovation may seem like an odd thing to ask of a GUI client, but there are areas where GUI clients can improve on the command-line tools backing them.

Support the user’s workflow

In a talk at NSConference, Aral Balkan said that “your UI shouldn’t look as if your database had just thrown up all over it”. This is what I’m reminded of when I look at SourceTree.

It feels like someone took a window and just threw in a pushbutton for every action, a text field for the commit message, and a checkbox for every option. It presents all of Git to me at once. It overwhelms not only me but also my screen space: it usually shows much more than I need at any one time, and since all of it has to be visible, everything is too small to be used comfortably.

All version control software needs to become more aware of context, of “what is it time for now”. Give the user a screen display that only shows things relevant to the current operation.

The File List

The file list is not just useful for when you want to commit a change. It can help with code navigation: I’m in a big project, I’ve edited a few files, and I’ve viewed many more. I need to get back to the spot where I started my change after implementing some needed subroutines and their tests. The recents list in Xcode won’t help me there; I passed too many files on my search for the right spot, some in the main tab, some in multi-file search. But my VCS knows which files I just touched.

I just go into the VCS GUI client, to the list of changed files, and there are the 5 out of 50 files I actually changed. And now that I see those 5 file names, I can recognize what my colleague named that file. I’ve quickly found it.

Why don’t more VCS GUIs support code navigation? Let me search. Let me select. Heck, if you wanted to get really fancy you could show me the groups in the Xcode project that my files belong to. Analyze, correlate.

Peripheral Vision

The one thing all GUIs for version control systems provide these days is what I’d call “peripheral vision”: they show a constant list of the files in your repository and mark, live, which ones have changed.

You don’t have to actively call git status. Whenever a file changes, it shows up.
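For the curious, a minimal sketch of how such a watcher could work, simply polling git status (a real client would hook into FSEvents instead); the git binary location and the repository path are placeholder assumptions:

```swift
import Foundation

// A sketch, not a product: poll `git status --porcelain` and report whenever
// the set of modified files changes. A real client would use FSEvents instead
// of a timer. The git path and repository path are placeholder assumptions.
func changedFiles(in repo: URL) throws -> Set<String> {
    let git = Process()
    git.executableURL = URL(fileURLWithPath: "/usr/bin/git")
    git.arguments = ["status", "--porcelain"]
    git.currentDirectoryURL = repo
    let out = Pipe()
    git.standardOutput = out
    try git.run()
    git.waitUntilExit()
    let text = String(decoding: out.fileHandleForReading.readDataToEndOfFile(),
                      as: UTF8.self)
    // Each status line is "XY <path>"; strip the two status columns plus space.
    return Set(text.split(separator: "\n").map { String($0.dropFirst(3)) })
}

let repo = URL(fileURLWithPath: "/path/to/working-copy")
var lastSeen = Set<String>()
Timer.scheduledTimer(withTimeInterval: 3, repeats: true) { _ in
    guard let current = try? changedFiles(in: repo), current != lastSeen else { return }
    lastSeen = current
    print("Changed files:", current.sorted()) // a GUI would refresh its file list here
}
RunLoop.main.run()
```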

By having these updates show up of their own accord, I can be warned of external influences automatically. SmartSVN, for example, shows both the local and remote state of a file. So if a colleague modifies, on the server, the Xcode project file that I’m currently editing locally, I immediately see in my peripheral vision that I have a pending conflict.

Each Version Control System an Island

Most of the version control GUIs I’ve mentioned ignore one important fact of most people’s work with version control: sure, it is useful to a single developer as unlimited undo, but most of the time it is used in collaborative environments.

If I’m collaborating with someone, isn’t the most important thing to keep me abreast of what the other developers are doing? Why do all the GUIs except SmartSVN (with its horrible, non-native Java grab-bag UI) focus so much on showing me the working copy that is right here in front of me, then act surprised when something changes on the server and drop me into an external diff client without any hand-holding?

Apart from showing remote status, why don’t they keep me informed of incoming changes? Why does Cornerstone only let me view the log history of individual files or folders, but doesn’t constantly keep the list of commits in my peripheral vision? Why does no client offer to show me a notification whenever a new push happens on the server?
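Such an incoming-changes check is cheap to build. A minimal sketch, with placeholder paths, that a real client would run periodically and turn into a notification:

```swift
import Foundation

// A sketch of the "incoming changes" check described above: fetch quietly,
// then count upstream commits we don't have yet. Paths are assumptions, and
// a real client would run this on a schedule and post a user notification.
func git(_ arguments: [String], in repo: URL) throws -> String {
    let task = Process()
    task.executableURL = URL(fileURLWithPath: "/usr/bin/git")
    task.arguments = arguments
    task.currentDirectoryURL = repo
    let pipe = Pipe()
    task.standardOutput = pipe
    try task.run()
    task.waitUntilExit()
    return String(decoding: pipe.fileHandleForReading.readDataToEndOfFile(),
                  as: UTF8.self).trimmingCharacters(in: .whitespacesAndNewlines)
}

let repo = URL(fileURLWithPath: "/path/to/working-copy")
_ = try git(["fetch", "--quiet"], in: repo)
let behind = try git(["rev-list", "--count", "HEAD..@{upstream}"], in: repo)
if let count = Int(behind), count > 0 {
    print("\(count) new commit(s) on the server") // i.e. time for that notification
}
```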

They just don’t Learn from History

The commit history also seems to be an afterthought to most VCS GUI developers. The commit messages, the only human-curated part of the entire commit metadata, are usually hidden on separate tabs, or at best fighting for space with the file list and lots of other UI. File names are short. Commit messages are long. Why should those two lists be forced to be the same width?

In Versions, the commit list can only be read. I can see a commit’s changes and its message, but I can’t select a commit in the list to roll back to it, or to branch off from it. This violates one of the basic tenets of UI design: don’t have the user type in something the program already knows. The commit hash is right there in front of me on the screen; why do I have to type it in to check it out?

Moreover, the list of commits in Versions is not scannable. There are only barely noticeable color differences between the date, the name, and the commit message, and they sit too close together, separated by lines.

Ever wonder why the Finder uses alternating background colors to distinguish table rows? Because it’s easier to scan: lines are read by the mind as glyphs, additional information to be processed, whereas the “line” where two differently colored surfaces meet is simply accepted as a gap between things.

That’s why so many lists use columns. That way, if you’re looking for a commit from a particular colleague, you just scan down that column, able to completely ignore the commit messages.

The User doesn’t make Mistakes

Users don’t make mistakes. Bad GUI just leads them down the wrong path. When a user makes a mistake, be forgiving.

A contradiction? Yes. While most VCSes already have a never-lose-data policy under the hood, GUIs can improve on that: undo on text fields; a big warning banner across the window when the user is on a detached HEAD, visible even when the window is half-hidden behind Xcode; an offer to stash changes when the user switches branches with uncommitted changes.

If the user selects three “unknown” (aka new) files and asks you to commit them, don’t just abort with Git’s standard error saying that they aren’t under version control! Try to anticipate what the user wanted: show a window listing the offending files and offer to automatically stage them (with checkboxes next to them, so they can turn off any they might not have wanted to commit).

If a user tries to commit a binary file that has its executable bit set, maybe ask for confirmation in case they’re accidentally checking in the build products, and offer to add the file or one of its enclosing folders to the .gitignore file.

If the user tries to amend a commit, be smart and warn them against changing history that has already been pushed. But don’t warn them needlessly: can you check whether any remote branch already contains this commit, to detect whether the user has already pushed the commit about to be rewritten? If not, it’s safe; just let them do it.
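That check is a git one-liner away. A minimal sketch, assuming git lives at /usr/bin/git and with a placeholder repository path: if any remote-tracking branch already contains HEAD, amending would rewrite pushed history.

```swift
import Foundation

// A sketch of that safety check: if any remote-tracking branch already
// contains HEAD, amending would rewrite pushed history, so show the warning;
// otherwise stay quiet. /usr/bin/git and the repo path are assumptions.
let git = Process()
git.executableURL = URL(fileURLWithPath: "/usr/bin/git")
git.arguments = ["branch", "--remotes", "--contains", "HEAD"]
git.currentDirectoryURL = URL(fileURLWithPath: "/path/to/working-copy")
let pipe = Pipe()
git.standardOutput = pipe
try git.run()
git.waitUntilExit()
let remotes = String(decoding: pipe.fileHandleForReading.readDataToEndOfFile(),
                     as: UTF8.self).trimmingCharacters(in: .whitespacesAndNewlines)
if remotes.isEmpty {
    print("Commit was never pushed; amend away.")
} else {
    print("Warn: commit already exists on \(remotes)") // time for that warning
}
```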

Remote Possibility of Supporting a Workflow

I’ve mentioned how we need to try to support the user’s workflow more, and how the server side is under-served. This also applies to setup. One of SourceTree’s standout features is that it not only lets you enter your GitHub or Bitbucket URL, but also shows you a list of your remote repositories.

You can set a default folder where your programming stuff goes, and then just select one of your remote repositories and click “Clone”, and poof: it checks it out, adds a bookmark for it, opens it in a window, and you’re good to go. Heck, Git Tower even lets you specify the address of an image file in your repository to represent it in the list, for quicker scanning.

Why has no VCS GUI added a Coda-style project list that automatically looks for project files and their application icons in a checkout to pre-populate each repository’s icon?

Re-open the repositories the user had open when your app was quit (yes, users may want to open several at once; deal with it!). And for heaven’s sake, why are there VCS developers who don’t know how to make their application accept a folder via drag & drop on its icon in the Finder or the Dock, so I can quickly open a working copy that’s right there in front of me without having to wait for an open panel?
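Accepting dropped folders is little more than an Info.plist declaration plus one delegate method. A minimal sketch, assuming a CFBundleDocumentTypes entry for public.folder; openRepository(at:) is a hypothetical stand-in for your own “open working copy” code:

```swift
import AppKit

// A sketch of the drag & drop support bemoaned above. It assumes your
// Info.plist declares a CFBundleDocumentTypes entry for "public.folder";
// openRepository(at:) is a hypothetical stand-in for your "open working
// copy" code path.
class AppDelegate: NSObject, NSApplicationDelegate {
    func application(_ sender: NSApplication, openFile filename: String) -> Bool {
        var isDirectory: ObjCBool = false
        let exists = FileManager.default.fileExists(atPath: filename,
                                                    isDirectory: &isDirectory)
        guard exists, isDirectory.boolValue else { return false }
        openRepository(at: URL(fileURLWithPath: filename))
        return true
    }

    func openRepository(at url: URL) {
        print("Opening working copy at \(url.path)") // hypothetical
    }
}
```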

Promise to be Better

I’m sorry, this has turned into a bit of a rant. But the fact is, there are so many VCS applications, yet most simply expose the commands of their command-line equivalents. Why do so few protect me from common mistakes, focus on what my colleagues and I want to achieve, and support us in that?

How can products connected to servers be so asocial?

Enough with the NSStatusItems! Teach your app its place!

I see more and more applications implemented as NSStatusItems at the right end of the menu bar. In this posting, I’ll lay out why this is a worrying development, and why you should only rarely implement an NSStatusItem.

Screen real estate

The menu bar is very precious screen real estate, and the most expensive part of your computer. It takes up a permanent 22 points at the top of your screen (if you have several screens, it may even show up on each one). The menu bar is fixed in position and size, unlike other windows, and no other window may inhabit these sacred pixels. You can’t shuffle it behind another window. It is always visible, always immediately clickable.

It is also used for an important part of the user interface of the current application. All of an application’s menus have to fit into this area. There is no scrolling, no wrapping to a second line.

Perspective of importance

One of the fundamental rules of UI design is to arrange UI elements by their importance: things that provide information the user constantly needs to be aware of, or that are constantly used, should always be in view or within single-click reach, while things the user uses less often can be relegated to less easily reachable spots that might require several clicks to get to.

The document window (or the main window, in the case of a shoebox application like iPhoto) is at the top of this hierarchy. That’s what the user works with most of the time and where her attention is focused. Floating palettes are also near the top.

Things you can’t put directly in front of the user like that go in a menu, where the user needs to click to discover them or trigger them. If something is even less important or needs to display information more complex than is desirable to put in a menu item, it can go in an auxiliary window shown by a menu item.

Popovers, while relatively new to the scene, are kind of halfway between these two: on one hand, you need to click to open them, like a menu; on the other hand, you can’t have as many of them as you can have menus. They also occupy a halfway position between a menu and a modal window, in that they can contain more complex controls.

NSStatusItem

So, now that we know how limited room in the menu bar is, and how it is the second go-to location after you’ve run out of main window space, where does NSStatusItem fit in here?

Well, NSStatusItems can show information in their icon, and otherwise work like a menu. They can immediately react to a click (like the “Notifications” icon in the upper right of the screen) or show a menu, or a popover.

They are also visible across all applications. As such, they are a permanent, most reliable fixture in the user interface: always visible, always clickable. Prime real estate if ever there was any.

From this it follows that a status item should only hold functions that belong in exactly this place for the user: something that is needed no matter which application is frontmost; something that is constantly needed, not just occasionally while the user is working on one particular project; or something that indicates an important piece of information, like how long the computer’s battery will last.

The reality of status items

Compare that to the reality we’re living with today: every Twitter client I’ve used so far has had a status item by default. A status item and a Dock icon. At the time of this writing I’ve written well over 57’000 tweets, but even I don’t think that Twitter is that important. One Dock icon is fine for seeing new tweets and posting a new one. It’s one click away.

I’m sure some users disagree, but really, are they the majority? Does it have to add that status item and take up Dock space by default? Couldn’t this be a feature that the user can activate if they think they need it?

Similarly, there are applications that perform periodic clean-up tasks in the background. Maintenance. Do I really need to see those applications’ icons in my menu bar permanently? Couldn’t they just show their icon while they are doing work, then remove it again? Couldn’t they be a GUI front-end with a background helper application that magically does its work? How often do I need to manually trigger a re-scan of my movies folder to see whether it contains new files, if the application watches the folder for changes anyway? And if that really is just a workaround for rare bugs, why not make me launch the GUI front-end to do it, and stay out of my menu bar?
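For comparison, appearing only while working takes just a handful of lines. A minimal sketch; the “⟳” title is a stand-in for a proper template image:

```swift
import AppKit

// A sketch of a status item with better manners: it appears while background
// work is running and disappears again afterwards, instead of squatting in
// the menu bar permanently. The "⟳" title stands in for a real template image.
final class TransientStatusItem {
    private var item: NSStatusItem?

    func workStarted() {
        guard item == nil else { return }
        let newItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.squareLength)
        newItem.button?.title = "⟳"
        item = newItem
    }

    func workFinished() {
        if let item = item {
            NSStatusBar.system.removeStatusItem(item)
        }
        item = nil
    }
}
```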

There are applications that let me run a server locally on my computer, for testing. Why can’t they just be a regular GUI front-end with the server as an invisible background process? Why can’t they just add a bookmark file somewhere that I can launch using Spotlight, instead of making me use yet another item in the precious status-item area of the screen to open the URL for that server?

Why does everyone have such an inflated sense of the importance of their app that they need to have an icon in the menu bar?

How to build a good restaurant web site

The typical restaurant web site, I’ve found, is completely useless and a waste of money. Here’s a short list of reasons why:

  • Most of them are 100% Flash. Nobody who owns a smart phone can view them. At all. So if I’m on the road and want to know whether your restaurant is open, I can’t see that, just because you wanted a photo slideshow with crossfades.
     
  • Most of them are missing the opening hours and/or the address. Those that have them often hide them in lots of prose. Someone on the road with their phone will want to know that information first.
     
  • Most of them are missing the menus. While some of them do have the permanent menu, it’s particularly the daily lunch deals and weekly changing menus that bring a prospective customer back to your web site.
     

Of course, everyone can moan and complain, so here’s my short and sweet summary of how to make a good web site for a restaurant:

  • Put the following on your front page: Your address (including the city and country, this is the internet, after all!), your opening hours, and a tag line like “Greek taverna” or “Italian kitchen” or “exclusive 4-course dining in separés” or something else that helps a first-time visitor immediately get an idea of your restaurant.
     
    And no, your address as the “legally responsible party” on your web site’s imprint page doesn’t count. That could be an office building for a restaurant chain. Make sure it’s clear where to go. Put a small picture of your front entrance on there so they recognize it.
     
  • Don’t use Flash. People on cell phones can’t see Flash, they just get a lego brick icon and that’s it.
     
    If someone is in your general area and wants to know where to go, they will call up your site on their smart phone to check the opening hours. Make it easy for them. You’re wasting money if half your interested customers can’t see your site.
     
  • Put your daily menu and specials on the site. This is easier than it sounds. You don’t have to pay a web designer every time. Pay them to make you one editable page where you can just log in with a password and edit the text from any browser. You probably already type up the daily menu and print it every day. Just copy it over there, click “Save” and anyone on the internet (potential customers sitting at work thinking where to go for lunch together, for instance) can immediately see what you have to offer.
     
    Your permanent menu is nice, but people who’ve been at your place a couple times probably have a general idea what’s on it already. The specials change daily or weekly, everyone has to look those up.
     
  • Bonus points: Include a phone number (or, even better, a web form) where people can make reservations. Ideally it would be hooked up to your reservation system and give immediate feedback. Otherwise, make sure you check your e-mail often and confirm reservations in a timely manner.
     
  • If you want to provide prose or an image gallery, put them on extra pages, so cell phone visitors don’t have to download all of that over a mobile connection.
     
  • And finally: Pay for a professional web designer and a professional photographer. It will show in the end result.

That’s my short list of how to make a good, useful restaurant web site. I hope it will help restaurant owners get the right thing from their web designers.

Creativity Finds a Way

[Uli’s xDraw XCMD screenshot]

Great observations

There is currently a nice little discussion about HyperCard going on in the comments on Stanislav Datskovskiy’s article Why HyperCard had to Die.

The article looks at the right facts, but I think it draws the wrong conclusions: yes, HyperCard was an echo of an era when a computer was a complex machine, and its owners were tinkerers who needed to customize it before it became useful. Yes, when Steve Jobs came back, he killed a lot of projects. And the Steve Jobs biography mentions that he doesn’t like other people screwing around with his designs.

But I do not think this automatically leads to the conclusion that Apple is on a grand crusade to remove the users’ control over their computers. Nor does it mean what many of the commenters say, that Apple is trying to dumb down programs and that programmers are underestimating their users.

How people work

Every programmer knows how important a coherent whole is: if a button appears in the wrong context, it will easily (and unintentionally) trick the user into thinking it does the opposite of what it really does. You can add paragraphs upon paragraphs of text telling your users otherwise, and they will not read them.

This is not because users are stupid, but because users “scan”. Screens are complex, and full of data. For the user to find something without spending hours of their life on it, they tend to quickly slide their eyes across the page, looking for words that come from the same category as the thing they are trying to do next.

This is a human efficiency optimization. It is a good thing. If we didn’t have this mechanism, we’d probably all be autistic, and incapable of coping with the world. Once a word is found, the user starts reading a little bit around it to verify the impression that this is what they want, and then they click the button.

It seems trivial to engineer a program for that, but it’s easy to overlook that a computer doesn’t run a single application at a time. There are other things happening on the screen; there may be other windows open. There may be system alerts popping up. Even if those are marked with each application’s icon or name, chances are that most users are too busy getting actual work done to memorize application names and icons. They won’t be able to distinguish what is your application and what is another’s.

It’s similar with haxies. Any halfway successful programmer probably has a story of trying to track down a crash or oddity a user encountered in their program that was actually caused by a plug-in or haxie that injects itself into every application to modify some behaviour system-wide. Once those are installed, even I occasionally forget that I have them. Or I don’t expect them to have an effect: why should a tool that watches for my cursor hitting the edge of my screen, and then remote-controls the cursor on another computer as if it were an attached screen, cause the menu bar to just randomly not show up when switching between applications?

Software is complex. Designing reliable, usable software is complex. In a comment, Stanislav had a great analogy for this (in response to someone’s pipe dream that one would just have to use HTML, and the technical stuff was all already done, you just had to add the human touch):

All the pieces of the world’s greatest statue are sitting inside a granite mountain. Somebody just has to come and chip away all the extra granite, adding the human touch. The technical problems are all virtually solved!

Software is hard. I don’t say this because it makes me sound cooler when I say I’m a programmer, but because you’re not just building a thing, you are building behaviours. HyperCard was notorious for being the tool behind a number of the ugliest, least Mac-like programs ever released on the Mac. Because even with the best camera, your movie is only as good as the cameraman.

So was Steve Jobs happy to get rid of HyperCard and stop people from screwing with his design? Probably. Was he forced to let it linger instead of killing it outright because he didn’t want to lose the educational market? I can’t disprove it. But Steve Jobs was also known to be an idealist. He genuinely thought his work would improve the world. What would he gain by making everyone dumb and uncreative?

Why assume malice when Occam’s Razor is a much better explanation?

You can’t hold a good idea down

When the Mac was originally released, it was intended as a machine for everyone, to bring computers to the masses. Almost from day one, the goal of Apple Computer, Inc. has been to drop the darned “Computer” from its name. Compared to the mainframes of the time, the Apple ][ that started the home computing revolution was already a “dumbing down” of computers.

Was this the end of the world? Should we have stayed in the trees? Will people become un-creative? Look around on the net: there are people out there who have no programming skills, who dig around in the games they bought and modify them, create their own levels, and use existing game engines to create a game about their favorite book or TV show. Heck, there are people out there who have created a 3D game engine in Excel.

If there is one thing we can learn, it is that Creativity Finds a Way.

HyperCard was designed in the late 1980s, for the hardware of the time, and for what very smart people back then thought the future would be. Being creative with a computer, at the time, meant writing code. So they gave us a better programming language, with ways to click a “Link to…” button that creates the code to change pages. Not unlike how Henry Ford’s competitors would have built you a better horse, but not a car.

Yes, I am saying that the developers of HyperCard didn’t quite anticipate the future correctly. They didn’t anticipate the internet, for example. That’s no shame; it was ’87 back then. I didn’t get what the internet would be good for in ’91. I probably wouldn’t even have managed to invent a better horse. All I am saying is that HyperCard’s creators didn’t know some things we know now, and probably made some compromises that wouldn’t make sense today.

The world has changed: This is 2011! All our programs do so much more. You can create 3D graphs in Excel, colorful drawings and animations in Keynote, and upload it all to the web with Sandvox. So many tools are available for such low prices. Why would you bother with a low-level, rudimentary tool like HyperCard when all you want to do is a movie with some branching?

A new tool for a new world

After all that, it might surprise you that I still agree with everyone in the comments who says that we need a new HyperCard for the 2010s. However, I do not agree that any of the examples the commenters mentioned (or even HyperCard as it shipped) is that program. Yes, Xcode and the NeXT-descended dev tools, and VB and others, use Rapid Application Development drag-and-drop manipulation to lay out your UI. But guess what? So does Pages.

Yes, you can use Ruby and Python and Smalltalk to branch between different choices. Or you could just use links to move between web pages built using Sandvox.

Yes, you can build real, runnable applications from your work with Java or AppleScript. But why would anyone want to build an application? Movies can be uploaded to YouTube, web sites can be built with WordPress, and I don’t have to transfer huge files to users: I just send my friends the link, and they know what to do. There’s no installer.

Our computing world has become so much richer, so much easier, that it is more efficient, and actually smarter, to just create your stuff with those tools that we old HyperCarders see as dumb. Their users can stand on the shoulders of giants and spend their time creating the best possible gameplay instead of coding yet another 3D renderer. That is why HyperCard 2.4 just won’t cut it, or as David Stevens commented on that very same article:

most people get on a train to go somewhere, not because they really want to lay track, which explains the shortage of track laying machines in yard sales, and the demise of HyperCard.

The new HyperCard won’t be like HyperCard. Maybe the web is enough. Maybe it will just be a good “web editor”, like the one that used to be included in every copy of Netscape back in the day.

Or maybe, it will just be a niche product aimed at people who find that they want to do more than their tools let them do. This will not be the typical movie-maker, podcaster or writer. Like the directors, radio hosts or journalists in the generations before them, those will specialize. They will be exceptional at directing, making a show or researching the truth. But they will not care how the camera transports the film, they won’t care how their voice is really broadcast as radio waves and re-assembled in the receiver, nor how to build a printing press.

The person a new HyperCard is aimed at will be someone like you, who saw HyperCard and at some point stood up and said: this is not enough. I want to create more. Who then maybe went out and bought CompileIt!, which let her use the system APIs from the comfort of her HyperCard stack, only needing to touch the scary memory management stuff when absolutely necessary. And who then went and bought MPW, or THINK C, or CodeWarrior, or Xcode.

A real programmer doesn’t program because she wants to use HyperCard. A real programmer programs because she wants to. Because she just has to. A real programmer doesn’t limit herself to that one language and IDE she learned once so she never has to learn anything else. A real programmer learns new programming languages because each language teaches her a new way of looking at or solving a problem. A real programmer has been programming since she opened PowerPoint for the first time. She will find a way.

It was like that back in the days of HyperCard. Why shouldn’t it be like that again?

The Sandbox, Pro and Contra

[Blog sandbox]

Sandbox?

With Lion, Apple has introduced the “Sandbox”. Essentially, it is a way to un-break the Unix permission model for the internet age. In ye olde days, user accounts and permissions were designed to prevent one user on a big mainframe from screwing with the files of another user working on the same mainframe.

But in this internet age, our user account runs a lot of code that doesn’t come from us: scripts from web sites, applications a distant acquaintance e-mailed us … While we should be careful not to run untrusted software, the fact of the matter is that we often have no choice, and when we do, we might not have enough information to make an educated decision.

When we run a screen saver, we run it with certain expectations: It should save our screen, not access our address book. It is time that these expectations get formalized in code and property lists, so the computer can enforce them for us. That way, even beginners can be protected.

How does it work?

In short, your application gets locked out of a number of “sensitive” areas. That includes the file system (except for a few places like your preferences file and your Application Support folder), and also inter-application communication (e.g. AppleScript and other ways of accessing or talking to other applications). However, this happens transparently to the user: if the user drags a file onto your application or its icon, or selects a file in an open panel, a temporary exception for that file is made.

It becomes part of your application’s sandbox (at least for a while; a restart of your application will exclude it from the sandbox again). Similarly, while your application cannot communicate with other applications, the user can run an AppleScript in the script editor, and *that* will be able to access all the applications it likes.
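One way to keep such an exception across relaunches is a security-scoped bookmark. A minimal sketch, assuming the app has the com.apple.security.files.bookmarks.app-scope entitlement, and leaving out where the bookmark data is persisted:

```swift
import Foundation

// A sketch of keeping access to a user-chosen file across relaunches via a
// security-scoped bookmark. Assumes the app-scoped bookmark entitlement
// (com.apple.security.files.bookmarks.app-scope); persisting the Data blob
// (e.g. in user defaults) is left out.
func rememberAccess(to url: URL) throws -> Data {
    try url.bookmarkData(options: .withSecurityScope,
                         includingResourceValuesForKeys: nil,
                         relativeTo: nil)
}

func restoreAccess(from bookmark: Data) throws -> URL {
    var isStale = false
    let url = try URL(resolvingBookmarkData: bookmark,
                      options: .withSecurityScope,
                      relativeTo: nil,
                      bookmarkDataIsStale: &isStale)
    guard url.startAccessingSecurityScopedResource() else {
        throw CocoaError(.fileReadNoPermission)
    }
    // Balance with url.stopAccessingSecurityScopedResource() when done.
    return url
}
```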

This means that, if a fake screen saver wants to e-mail your address book to an e-mail harvester, it will not be able to.

But my application uses AppleScript to …

This is a problem for those developers who want their application to run a script (e.g. to automatically import data from elsewhere). While I agree that’s annoying, a lot of the things developers do with AppleScript are workarounds for other issues (like the folder path example here). Chaining together applications is an engineer’s thing to do, and it often causes bad usability. Users prefer a complete solution.

I hope many developers will consider banding together with the developers of related tools and doing something not unlike Coda (which combines Panic’s Transmit engine with The Coding Monkeys’ SubEthaEdit text engine). That avoids all the inter-application communication, gives you better control over the user experience (including sensible error messages instead of cryptic messages from AppleScript), and will probably make you Apple’s favorite child.

I can understand how Apple might want to bolt down security and instead provide dedicated API. Of course, if you’re the only one who needs the clicked Finder window’s folder, they might just not spare the manpower and you’re screwed. But I can understand their priorities, even if I only partially share them. And the above usability arguments may work as an encouragement for Apple to continue down this path.

Applications as entitlements

However, I do think that Apple should be flexible. Sandboxing will be mandatory for getting applications approved for the Mac App Store in a few months. While you will be able to get exemptions from some of these restrictions, those are called “temporary”, which means what Apple giveth, Apple taketh away again. If they giveth at all.

Therefore, it would be great to have applications as entitlements: i.e., someone who needs to AppleScript the Finder could just add a com.apple.Finder entitlement, and the user would get notified of that on installation. That way, if I install a screen saver and get asked whether it may access the address book, I know something is wrong. And if it doesn’t ask for the address book entitlement, the address book API just wouldn’t work. Security. (Of course, there would need to be more granularity for the address book in particular: I would give applications access to my “Me” card, but not to the rest of my address book. So maybe that’s a bad example.)

The advantage of this approach is that Apple’s reviewers would just have to look at the entitlements to find out whether someone is doing something freaky. And if you used an unusual entitlement, Apple could request clarification, and then either require the use of better API, or reject, or accept.

It may still leave us at Apple’s approval mercy, but it is at least flexible enough to allow for many utilities to be re-added to the app store.

What can we do?

File bugs. If you like one of my suggestions above, feel free to request such behaviour from Apple. It will probably be marked as a duplicate, but it will get counted. Make sure you file the bug not just about the general mechanism, though, but also describe in what way it applies to your application. When I talked to some Sandbox engineers at WWDC, they seemed very interested in accommodating our needs. Whether they will be able to probably depends on what their superiors decide, but we have the engineers on our side. Even if you just write a short bug report, it will help. Besides, you can’t complain if you haven’t at least put your opinion on record.

Death will take care of that…

[The Macintosh Intro welcome screen]

The past

When I got my first Mac, it came with the Macintosh Intro, a disk that held a little tutorial explaining how to use various parts of the Macintosh. Among the topics covered was what a mouse is and how to use it (and even that you can lift it off the table and put it down in another spot to have more space to move in a particular direction).

As Steve Jobs recounted during his AllThingsD interview with Walt Mossberg, when someone suggested also including a touch-typing tutorial in this intro, since many people did not know how to use a keyboard, he simply said not to bother, as “death will take care of that”.

The present

When you look at today’s Macs, it appears this has already happened. Not only is there still no keyboard tutorial, the mouse tutorial is gone as well. Heck, you don’t even get a nice little character in an Apple sweater grabbing the menu bar with his hand and pulling out a menu, or zooming and closing a window. It is assumed that everyone today knows what a window and a menu are, and how to use them.

Which isn’t that far from the truth. Children today see other people using a computer and a mouse all day long, be it on the bus, in bank offices, stores or when watching their parents buy plane tickets for the next vacation at home. Their parents answer their curious questions, and they probably even “play computer” with cardboard boxes. In most high schools, students are taught the basics of computer use, even up to writing Excel formulas. Typing and basic computer usage is a necessary, ubiquitous skill today.

The “Application”

One common problem, on the Macintosh as well as on the iPhone, is that current generations of users have big problems understanding the concept of an application. You see it in App Store reviews that complain to a third-party developer about the cost of an application, saying it should come free with the phone (tell that to Apple!). You see it in the confusion of users who have closed the last window of an application and find that the menu bar doesn’t belong to the (inactive) frontmost window. You see it in the casual way people type their password into any web site that claims to need it. The distinction between a browser or operating system and the actual applications or sites running in it is unclear.

Certainly, some of this confusion stems from the fact that this is confusing. An application with no windows, with only a thin menu bar indicating it is still there, gives so small a clue that application developers should work hard to avoid that situation. The system asks for passwords in so many situations, without a non-geek explanation, without any cause obvious to the user. If Mail.app asks for a new password on any error, even when the error was not an authentication failure, just to cover a few edge cases, the user is bound to get used to typing in the password arbitrarily. And once the user has no way of distinguishing valid from invalid password requests, the added security is lost, and all that remains is an annoyance. It’s security theatre.

However, some of the confusion may come from the users’ mental models. Every user has one. Most of them are built alone, simply by observing the behaviour coming out of the machine, without the inside knowledge we computer engineers have. If your mental model of how a computer works was built twenty years ago, based on the outside behavior of a completely different system than we have today, it’s no surprise that some of the spots where you filled in the blanks might lead you to the wrong conclusion. I’m not blaming the user. Most of the model is correct and works. How would you know part of it is wrong?

The future

Just like people in the original Mac days thought users would not understand keyboards, I hear people today saying that users will never understand multi-tasking, will never understand what an “application”, an “app”, or a “web site” is, how they differ, and how they are the same.

I don’t see it.

Humanity has adapted to changes in the world for millennia. People are flexible enough to understand these concepts. It took about 30 years for keyboards to become well-known enough that the basics of keyboard use no longer have to be explained (even if the “alt” key still mystifies many). People learn to cope with the things they need, and they get used to the things they are confronted with every day.

Even more than now, when people still rely on a vendor to give them their applications with the hardware, the future will involve people getting “apps” themselves. Like children’s TV shows today warn kids about expensive call-in TV shows and shady ringtone subscriptions, the future will see them mention apps and purchases. As ruthless as it may sound, the truth of the matter is that, within less than a generation, people unfamiliar with these concepts will have died out. At least in the computerized, so-called “western world”.

You’re kidding, right?

No. Though I’m simplifying. Of course, this is a two-sided development. Users will become more familiar with computer concepts, just as they now have a basic understanding of power outlets and lightbulbs. Similarly, technology will become more approachable. Applications of the future may not have some of the issues that confuse users today, but the general concept will stay present and will have to be understood.

Just you wait and see.

Update: Finally found a clip with the exact Steve Jobs quote, so linked to it and adjusted the article title from “They’ll die out eventually”.

The Menu Must Die

[A menu in Mac OS X]

When the original Macintosh user interface was created in the mid-1980s, development happened under constraints that we can hardly wrap our minds around today. Add to that the foresight and genius that went into its development, consider how well the Mac UI has held up, and it is not surprising that some things have remained unchanged and unquestioned for decades.

One of these great, and undoubtedly very characteristic, aspects is the Mac’s menu bar. Unlike other operating systems, the Mac has exactly one menu bar, at the top edge of the screen, where it can be reached quickly by violently shoving the mouse upward, which makes it as easy to hit as a mile-high button.

The menu bar is a very compact container for hundreds of commands that need to be accessed quickly and frequently, but that do not fit into a more reachable location, like buttons in the main window, because more important actions are already taking up the room there.

However, looking at an individual menu, the constraints of the time become apparent: It is a display that seems too simple for all the complex behavior it contains. Part of it is probably just an attempt at mimicking the appearance of an actual, physical menu, to comfort a less geeky audience. But you can still see that part of the design was also engineered to be so simple that it could be shown and hidden quickly: Lines of text, the occasional separator.

[A menu with checkmarks in System 6]

But is this really a good, contemporary user interface? Think of checkmarks: there is no way for a user to discern, by looking at a menu item, whether it will function as an action (pushbutton), a toggle (checkbox), or a mutually exclusive selection (radio button). Back when the only way to show a menu quickly enough was to save the pixels underneath it, draw the menu directly into screen memory, and restore the saved pixels afterwards, instead of going to all the effort of creating and tearing down a window, that was a clever optimization.

But why are we still doing that today? Our menus are actual windows, menu items can be actual views, and we spend cycles like mad to composite our menus with transparency. Most of the differences between menus and windows are gone: we have multi-tasking now, so the OS needn’t stand still while a menu is open. It has been found that holding down the mouse to keep a menu open and then releasing it to select is too complex and error-prone an interaction (though it was originally intended to make every menu command reachable with “only one click”), so we can now click once to open a menu, and a second time to select something.

[A Popover window on the iPad]

And the iPad (where you don’t have a mouse pointer to shove toward an edge, so things at the edge of the screen are no easier to reach than those closer to the center) actually has popovers, menu-like constructs that pop up from buttons and contain regular user interface elements. This is what should be brought “back to the Mac” with Mac OS X “Lion”.

We no longer need spartan text lists; we need pop-up windows containing recognizable user interface elements for quick access, be it pushbuttons, checkboxes and radio buttons (which we already have in menus, by function if not by appearance), search fields (like Spotlight), or sliders.
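AppKit’s NSPopover already lets you prototype exactly this. A minimal sketch that hangs a checkbox and a slider off a button, menu-style (the control titles are made up for illustration):

```swift
import AppKit

// A sketch of a menu replacement: an NSPopover full of regular controls,
// shown from a button. The control titles are made up for illustration.
final class OptionsPopover: NSObject {
    private let popover = NSPopover()

    override init() {
        super.init()
        let content = NSViewController()
        let stack = NSStackView(views: [
            NSButton(checkboxWithTitle: "Show invisibles", target: nil, action: nil),
            NSSlider(value: 0.5, minValue: 0, maxValue: 1, target: nil, action: nil),
        ])
        stack.orientation = .vertical
        stack.edgeInsets = NSEdgeInsets(top: 8, left: 8, bottom: 8, right: 8)
        content.view = stack
        popover.contentViewController = content
        popover.behavior = .transient // dismisses on an outside click, like a menu
    }

    @objc func toggle(_ sender: NSButton) {
        popover.show(relativeTo: sender.bounds, of: sender, preferredEdge: .maxY)
    }
}
```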

Maybe we should even take the time and bring back tear-off menus (available in HyperCard on System 6 and before, on NeXTstep, and planned as a major UI element for the aborted “Copland” version of Mac OS 8), which can serve as both quickly accessed menus or floating palettes, whatever the user prefers … ?

[A tear-off tools menu in HyperCard]

Hacking the Press – A point for usability in press kits

[Screenshot of the folder window for an example press kit by Realmac Software]

I once saw Adam Engst, of TidBITS fame, give a talk called Hacking the Press at the Advanced Developers’ Hands-on Conference (the first successor to MacHack). It was a great introduction to how the press works, told with the average programmer in mind, translating the life of a journalist into words we geeks can understand. I don’t remember much of it in concrete detail, but whenever the topic of press releases comes up, I realize that I know much more about this stuff than I by all rights should, so I guess Adam managed to insinuate himself into my brain quite well.

Recently, Nik Fletcher of Realmac Software gave a great interview about press kits, press releases and related matters on the MDN Show Podcast, and I realized that all the great information provided there was missing one important answer, one I probably first heard from Adam:

Why do I need a nice press kit?

Nik and Scotty were kinda struggling, offering vague benefits like “being nice” or “convenience”. But nothing hammers home the point better than a bit of enlightened self-interest:

There are oodles of Mac applications out there. Moreover, there are tons of good ones among them. And all of them send out press releases to the same three score or so journalists who, like Adam, have pull in the Mac world. All of these applications are equally worthy of coverage. So, all those journalists are sitting there, sifting through huge piles of press releases for both bad and good applications, picking out the worthwhile ones. And once they have those, they have to go over these releases again and again, and find the ones they will finally cover in the space they have.

Some choices are obvious: If it’s a “big”, well-known product, it gets covered. If some other similar product has been in the headlines somehow, or hasn’t been in the press (or that particular publication) for a while, a product may get covered to “fill that slot”. Photoshop not done much for you lately? Great! More coverage for Pixelmator and Acorn! After all, users are still looking for good painting and retouching applications. Similarly, if a problem is on the journalist’s mind at the moment, an application that addresses this issue is more likely to be covered.

But what if you don’t fit that pattern? Well, you have to compete with the rest of the worthy apps. It’s a tough call. Now, if your application has a gorgeous press kit with beautiful screenshots/box shots/whatever of your product, and provides a lot of background information and links to relevant articles on Wikipedia etc. that the journalist can make use of for their article, that may just tip the balance in your favor.

We all know how cool it is to find a list of links and information about a particular topic: you start on one Wikipedia page about embroidery, and suddenly you’ve read half the site, getting to modern computing via the Jacquard loom, and you’ve learned some interesting things in the process.

You’ve just helped the journalist find an angle that helps cover your product. They can write a witty little intro piece about embroidery and how far it’s come, and if you’re lucky they’ll say that your embroidery application is what all this has naturally led to. Even if the journalist has to truncate the article and that stuff goes away again, the journalist will remember. There’s a personal experience that now connects the journalist to your application, and it helps you when you’re up against similarly worthy opponents the next time:

“Let’s see what interesting things the EmbroiderWorks press kit for their new product contains…”

Yes, I’m aware I’m illustrating the ideal, hit-the-jackpot case. But the bottom line remains: When it comes to being covered in the press, you are not just competing against similar applications, you’re also in competition with every other application out there. Many of these are as well-executed as yours.

Having a well-structured, discoverable press kit with the best user experience you can come up with, including URL clippings (.webloc) that lead them to your web site at a double-click, spec sheets, a collection of dictionary entries and sources for any required domain knowledge, maybe even suggestions for articles on topics that would include your application (but also others) … all of that can help you get ahead of the rest and turn a tie into a win.

Double click is a shortcut


John Gruber mentioned in passing that people are confused about when to double-click and when not to. It’s true, but that doesn’t just apply to users. I’ve seen many application developers who don’t know (or simply don’t care) when to use a double-click and when not to.

The simple fact of the matter is: double-clicks are a shortcut.

Look at the Finder: A single click selects an object. A double click opens it. A double click here is simply a shortcut for a single click (“select this item”) plus the most common menu item used on this item (“File” -> “Open”).

Many users are simply never taught that this is why to double-click. Many think “Files are always double-clicked”.
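For the developer’s side of this, here is a minimal sketch of the Finder model in AppKit: the table view handles the selecting single click itself, and the double click is wired to the very same action the Open menu item would invoke (the controller and openSelectedItem(_:) are hypothetical stand-ins):

```swift
import AppKit

// A sketch of the Finder model in AppKit terms: the table view handles the
// selecting single click itself, and the double click merely triggers the
// same action the Open menu item would call.
final class FileListController: NSObject {
    let tableView = NSTableView()

    override init() {
        super.init()
        tableView.target = self
        tableView.doubleAction = #selector(openSelectedItem(_:))
    }

    @objc func openSelectedItem(_ sender: Any?) {
        guard tableView.clickedRow >= 0 else { return }
        print("Opening item in row \(tableView.clickedRow)")
    }
}
```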

In the dock, you can’t select an item. So, a single click already triggers an action.

I won’t count minimizing windows by double-clicking the title bar here. Why? Because it’s actually a historic feature. Back in System 7, you couldn’t minimize windows; there was a title bar with a close button and a zoom button. Someone wrote a nice extension called “WindowShade” that rolled up a window into its title bar. Since they couldn’t add a widget to every window, and a single click already dragged the window, they just decided to use a double click. When that extension got rolled into the system with System 7.5, the shortcut stayed, and it never got removed, even after Mac OS 8 added the collapse box widget.

I don’t know why this feature is even still in OS X. We have a “minimize” widget taking up valuable screen real estate. Why even leave something like that in? So many newbies trigger it accidentally and wonder where their window has gone.

Inference vs. Knowledge

I’ve blogged before about Sensible defaults and Anticipating User’s Needs. One suspicion the feedback on that article raised in me was that people are very unclear about when inferring user intention is good and when it gets in the way. Of course, this is not easy, and thus there’s no clear-cut answer, but if you’re aware of what you are doing, you can find the right way.

Distinguish Inference from Knowledge

Just like when designing any other algorithm, there are two extremes: On one hand, you may actually know what the user is trying to do. E.g. the user chooses the Quit menu item, and everything has already been saved: You know that the user wants your app to go away. It’s straightforward to implement.

On the other hand, there are also cases where you do not know what the user wants to do. E.g. the user has chosen Quit, but has unsaved changes. Does the user want to discard all changes made? Or did the user forget to save?

What can we do in the second case? Well, for one, we can apply a heuristic. We could assume that the user doesn’t want to lose data, and just save implicitly and then quit; any user who actually didn’t want to apply the changes she made would simply be screwed. Or we could assume that the user knows what she’s doing, and just quit and lose all unsaved changes. But everyone makes mistakes, and computers should be forgiving. No action the user initiates should be an irreversible mistake. Heck, even my washing machine lets me pause it and add a few more socks I found behind the couch. Why shouldn’t my Mac?

Still, both are options we have. There’s also a third option: put a fat, stinkin’ dialog in the user’s face. This is the equivalent of grabbing someone about to leave a store by the collar and asking him: HAVE YOU PAID YET? That may sometimes be necessary, but generally you want to be nice to your users: you don’t want to halt them in the middle of their work. Ideally, you’d just quietly do the right thing.

The thing to keep in mind here is that you simply do not know what the user intended. Even worse: although the user explicitly said Quit, you do not know whether you should actually do that. This is neither bad nor good, but it is an important thing to keep in mind when you design your program: how certain are you that what your app is doing now is what the user wants?

If you are very sure, go ahead, do it. But what if you’re not so sure? The first rule should be: do no harm. The developer is given care of the user’s valuable data, her work, so he should not damage it. Take our quit example from above: in most applications today, you should not just quit if there are unsaved changes. Not saving would lose whatever changes the user just made; saving could damage the previous state of the data by applying changes that were never intended to be saved.

So, should we ask?

What if the user could just re-open the document and undo these damaging changes? In that case, just saving would actually be a much better choice. Nothing is lost, and the saved undo stack in the document lets us revert any damage.

So, you see, even in a common case, which is done so frequently in every application on your computer, you should actually be asking: What do I know? How sure am I of this knowledge? Is there a heuristic that lets me do something implicitly, but doesn’t hurt if I get it wrong?
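As a sketch of that reasoning in code, here is what the quit decision could look like in an app delegate; the two Boolean properties are hypothetical stand-ins for whatever your document model can actually report:

```swift
import AppKit

// A sketch of the quit heuristic discussed above. The two properties are
// hypothetical stand-ins for what your document model would report.
class AppDelegate: NSObject, NSApplicationDelegate {
    var hasUnsavedChanges = true
    var documentStoresUndoStack = true

    func applicationShouldTerminate(_ sender: NSApplication)
        -> NSApplication.TerminateReply {
        guard hasUnsavedChanges else { return .terminateNow } // nothing at risk
        if documentStoresUndoStack {
            saveAllDocuments() // harmless: any change can still be undone later
            return .terminateNow
        }
        askUserAboutUnsavedChanges() // only now do we interrupt with a dialog
        return .terminateLater
    }

    func saveAllDocuments() { /* save implicitly */ }
    func askUserAboutUnsavedChanges() {
        // Show a sheet/alert, then call
        // NSApplication.shared.reply(toApplicationShouldTerminate:) with the answer.
    }
}
```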

Ask the User, but Do Not Ask the User

Another thing that is often done wrong when applying heuristics is handling the situation where you aren’t sure what to do. Yes, you need a decision from the user, but that does not mean you should ask. Every time you put up an alert, you interrupt users in their work and force them to read the alert text and make a decision. If you do this too often, they will just get used to clicking one of the buttons without reading. The one time they actually make a mistake, they’ll already have reflexively clicked that button, and be screwed.

Or in other words: Every time you put up an alert panel, God kills a cute little anthropomorphic paperclip.

You have many options to get input from the user, an alert isn’t the only one. And by alert, I mean everything that is kind of modal, including the weird status messages with buttons iTunes shows at the top of the window in its LED display.

For example, if your application detects at startup that it was quit with a document open, should it ask the user to reopen? I say no. What should we do instead? Well, look at your application’s workflow and typical use, of course!

If your application is mainly for generating documents, like GarageBand, where you typically create a song or Podcast over a period of time, then export it to a more standard audio format and publish it in some way, the user will likely want to return to a document. In that case, just reopen what was open last.

If your application is mainly for editing documents, for working with many documents, or for polishing and revisiting various documents, you’ll want to provide a “recent items” list instead. You can just use the built-in system menu, or you can additionally bring up a welcome window whenever no documents are open, showing recent items, templates for new files and an “open” button, like it’s done in Keynote or Xcode.
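The system already tracks this list for you; a minimal sketch of pulling it for a welcome window:

```swift
import AppKit

// A sketch: the system already tracks recent documents, so a welcome window
// can simply be populated from NSDocumentController instead of asking the
// user anything at launch.
let recents = NSDocumentController.shared.recentDocumentURLs
for url in recents.prefix(5) {
    print(url.lastPathComponent) // a real app would make these clickable rows
}
```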

And if your application is one of those single-window monsters, for managing sufficiently distinct sets of items in a central database, you can put a “recent items” section in your sidebar. The user gets the main window, and has quick and obvious access to their most recently used item, without you loading a potentially huge item the user is not that likely to use.

Yes, I’m really saying that not asking the user, not doing anything, is OK sometimes. Your app doesn’t know with enough certainty what to do. You don’t want it to annoy the user. However, if you can infer a reasonably common case, you can still design your application to support the user in that task by making sure that, at this moment, the controls to achieve this common task (e.g. grabbing the most recent file) are within easy reach.

Also, sometimes an action is so nonsensical that you can be sure the user was trying to do something else. Instead of just telling the user “No can do”, your app can infer what they might have wanted to do, and offer to do that instead. You’re putting up an alert anyway, so why not put something useful in it?

You are not the typical user

When evaluating how certain you are about the user’s intents, also keep in mind that you are someone involved in application development. The way you use and understand the application goes much deeper than that of your users. You may know more about kerning, anchored selections and grammar-checking algorithms, while the users of your text editor may know more about particle physics or elaborate stitching, or whatever topic they are writing about when they use your text editor. A novelist has different needs than someone who mainly writes correspondence, who has again different needs from someone writing a technical manual.

So, when you infer that “everybody hand-picks their photos off the camera”, keep in mind that that may not be true at all. Someone taking a large number of photos on a holiday in Lucerne, Switzerland, will probably want to quickly empty the camera’s memory card by transferring all the photos to their Mac, and then quickly head out again with a fresh card to enjoy the place some more. They will want to just plug it in and have it import everything while they take a shower.

On the other hand, a photographer on a scheduled outdoor shoot may be looking over each image anyway, to see whether he got the coverage he wanted, or whether he has to keep going to get the right picture because of a pedestrian in the background who was picking his nose at just the wrong moment, or a smudgy fingerprint on the lens that didn’t show up on the small camera screen.

So, how do I find out whether I’m right? Although there is no patented recipe, there are simple checks you can perform right away. Get a second opinion: even if you just talk to a colleague, chances are that they use the app differently and will spot an additional flaw in your thinking. You can give a test build to a friend and use them as your guinea pig. You can ask that favorite user who e-mailed you a “thanks for coding this” letter how they would achieve a task that involves your new inference. Whatever works. Of course, none of these people constitute a representative sample, but at least you’ll get additional data points to ponder.

And if you’re facing a big decision, you can always try more formal usability tests. But every little bit helps you gain experience, make better decisions, and discover those novel use cases you never considered.

But isn’t this inconsistent?

So you’ve tested something on your users and found that the system’s standard behavior doesn’t work. You’ve found a different solution, but now you’re wondering about consistency: everyone else on the Mac does it one way. Should you implement things differently?

The first thing you have to ask yourself is: what if a user forgets that my app behaves differently? If mixing up the way your app works with the way their other apps work will lose them data, you need to work harder. If it’s harmless … ? Well, you’ve done research to back it up, right? Tested it on a few friends, maybe even done a public beta? If you have a good reason and research to support your point that this behavior is better, then do it.

The number of beings doing something doesn’t make it the best thing to do. Hey, millions of flies eat doggy poo; still, we humans don’t eat it…