Death will take care of that…

By uliwitness

The Macintosh Intro welcome screen

The past

When I got my first Mac, it came with the Macintosh Intro, a disk that held a little tutorial explaining how to use various parts of the Macintosh. Among the topics covered was what a mouse is and how to use it (and even that you can lift it off the table and put it down in another spot to have more space to move in a particular direction).

As Steve Jobs recounted during his AllThingsD interview with Walt Mossberg, when someone suggested also including a touch-typing tutorial in this intro, since many people did not know how to use a keyboard, he simply said not to bother, as “death will take care of that”.

The present

When you look at today’s Macs, it appears this has already happened. Not only is there still no keyboard tutorial, the mouse tutorial is gone as well. Heck, you don’t even get a nice little character in an Apple sweater grabbing the menu bar with his hand and pulling out a menu, or zooming and closing a window. It is assumed that everyone today knows what a window and a menu are, and how to use them.

Which isn’t that far from the truth. Children today see other people using a computer and a mouse all day long, be it on the bus, in bank offices, in stores, or at home when watching their parents buy plane tickets for the next vacation. Their parents answer their curious questions, and they probably even “play computer” with cardboard boxes. In most high schools, students are taught the basics of computer use, even up to writing Excel formulas. Typing and basic computer usage are necessary, ubiquitous skills today.

The “Application”

One common problem, on the Macintosh as well as on the iPhone, is that current generations of users have big problems understanding the concept of an application. You see this in App Store reviews that complain to a third-party developer about the cost of an application and insist it should come free with the phone (tell that to Apple!); you see it in the confusion of users who have closed the last window of an application and find that the menu bar no longer belongs to the (inactive) frontmost window; you see it in the casual way people type their password into any web site that claims to need it. The distinction between a browser or operating system and the actual applications or sites running in it is unclear.

Certainly, some of this confusion stems from the fact that the situation genuinely is confusing. An application with no windows, only a thin menu bar to indicate it is still there, is such a subtle clue that application developers should work hard to avoid this situation. The system asks for passwords in so many situations without a non-geek explanation, without any cause obvious to the user. If Mail.app asks for a new password on any error, even one that was not an authentication failure, just to cover a few edge cases, the user is bound to get used to typing in the password arbitrarily. If the user has no way of distinguishing valid from invalid password requests anymore, then the added security is lost, and all that remains is an annoyance. It’s security theatre.
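One way application developers avoid this windowless state on the Mac is to have the application quit as soon as its last window closes, as some Mac applications (including some of Apple’s own) already do. Below is a minimal sketch of that approach, assuming a Swift AppKit app; the delegate method is standard NSApplicationDelegate API, while the delegate class itself is an illustrative assumption, not something from this article.

    import Cocoa

    // Minimal sketch: quit the application as soon as its last window closes,
    // so the user is never left with nothing but a thin menu bar on screen.
    // applicationShouldTerminateAfterLastWindowClosed(_:) is a standard
    // NSApplicationDelegate method; the delegate class here is illustrative.
    class AppDelegate: NSObject, NSApplicationDelegate {
        func applicationShouldTerminateAfterLastWindowClosed(_ sender: NSApplication) -> Bool {
            // Returning true tells AppKit to terminate the app when the user
            // closes its last window.
            return true
        }
    }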

However, some of the confusion may come from the users’ mental model. Every user has one. Most of them are built on their own, simply by observing the behaviour coming out of the machine, without the inside knowledge we computer engineers have. If your mental model of how a computer works was built twenty years ago, based on the outside behaviour of a completely different system than we have today, it’s no surprise that some of the spots where you filled in the blanks lead you to the wrong conclusion. I’m not blaming the user. Most of the model is correct and works. How would you know part of it is wrong?

The future

Just like people in the original Mac days thought users would not understand keyboards, I hear people today saying that users will never understand multi-tasking, will never understand what an “application”, an “app”, or a “web site” is, and how they differ and how they are the same.

I don’t see it.

Humanity has adapted to changes in the world for millennia. Humans are flexible enough to understand these concepts. It took about 30 years for keyboards to become well-known enough that the basics of keyboard use no longer have to be explained (even if the “alt” key still mystifies many). People learn to cope with the things they need, and they get used to the things they are confronted with every day.

Even more than now, when people still rely on a vendor to give them their applications together with the hardware, the future will include people getting “apps” themselves. Just as children’s TV shows today warn kids about expensive call-in shows and shady ringtone subscriptions, in the future they will mention apps and purchases. As ruthless as it may sound, the truth of the matter is that, within less than a generation, people unfamiliar with these concepts will have died out. At least in the computerized, so-called “western world”.

You’re kidding, right?

No. Though I’m simplifying. Of course, this is a two-sided development. Users will become more familiar with computer concepts, just as they now have a basic understanding of power outlets and lightbulbs. Similarly, technology will become more approachable. Applications of the future may not have some of the issues that confuse users today, but the general concept will remain and will still have to be understood.

Just you wait and see.

Update: I finally found a clip with the exact Steve Jobs quote, so I linked to it and adjusted the article title from “They’ll die out eventually”.

22 Comments

  1. A couple of comments: the first is that the “closing the last window doesn’t close the app” thing is confusing for two reasons: one, *some* Mac apps, including some of Apple’s own, *do* close when the last window is closed; two, most people are more familiar with Windows, where closing a window does close the app.

    As for mouse tutorials, I know Snow Leopard includes little videos about the multi-touch gestures one can enable (at least on laptops) showing what all those new gestures do.

  2. Travis Butler 2011-02-14 at 20:55

    As a counterexample, I’ll note that back in the early microcomputer/first home computer bubble days, the late 70s/early 80s, the assumption was much the same in principle. You had to learn to program to use a computer; but that was no problem, everyone was going to learn to program and it would improve society, encourage logical reasoning skills, etc. etc. And then the bottom fell out, the home computer bubble popped, lots and lots of early home computers got dumped in the closet or at garage sales, and computers in the home had to wait until a second, more successful start after pre-packaged software – and GUI-based software in particular – made computers much easier to use. While home users still had to pick up some computer concepts, I’d argue that by far the greatest adaptation was on the computer/software side of the equation.

    I see the ‘they’ll learn it/they’ll die out eventually’ argument presented a lot in support of various attempts to add power to computer UI at the cost of added complexity. Some Linux/OSS advocates go as far as the early microcomputer enthusiasts did, arguing that every user should understand the plumbing that runs a modern OS, and that the world will be a better place when they do. Well, I bought into the idea back in the 80s, and suffered a large disappointment when it didn’t come true. These days, I’d like to think I’m wiser… and while I agree that there’s a certain basic knowledge set and that it will continue to grow and expand, at a fundamental level most people just want something to work – they don’t want to know how it works.

    Witness the persistence of the dancing bunnies problem, after more than a decade of user education – and from what I can see, it’s there in the teenage generation as well as the oldsters, suggesting that it’s not something that’s going to go away as the oldsters die off.

  3. http://en.wikiquote.org/wiki/Max_Planck

    A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

  4. I suspect the “die off eventually” quote was apocryphal. Kriya Systems’ Typing Tutor III was available as soon as the Mac was announced, because Kriya was one of 27 seeded developers for the Mac. When the IBM PC was introduced, there were two applications available for it: Easy Writer (John Draper’s word processor) and Typing Tutor. A free tutorial wasn’t included with the Mac because Apple believed a third-party solution was better. Typing Tutor had already been the best-selling educational product for about six years when the Mac appeared.

  5. I think we need to go back and ask if there are any significant advantages to the traditional UI paradigm of free-floating stacked “windows” for general use cases, or if it’s just an arcane relic of the failed desktop metaphor.

  6. QWERTY keyboards long pre-dated computers – by close to a century. Typewriters weren’t quite as ubiquitous as computers are today, but they were certainly widespread. So, suggesting users have taken 30 years to get used to keyboards doesn’t really mean anything – no single user has spent 30 years learning a keyboard, have they, and I imagine that 30 years ago the vast majority of today’s users had never seen a personal computer.

  7. stsk, you may be right. I did try to find the actual quote for more context, and to link to it, but I couldn’t find it on any of the usual sites. Hence the vague “various reports”. Tom Vaughan’s Max Planck quote is better documented in that regard, but really, a keyboard (or an application, for that matter) is not really an “idea”. A concept, maybe. Using that quote might be aiming a bit too high.

  8. Ed, I never meant to imply anything else. We’re talking about society-level change here, not individual-level change. An individual cannot “die out”. And as to people 30 years ago not having seen a computer: you say yourself they had seen a typewriter. It was a professional tool for office workers, lawyers, and professional writers.

  9. Qwerty has it all wrong. Stacked windows may not be good for iPhones or iPads, but large screens have their place too and they’re not going away. Using up an entire 27″ display for one program or file is wasteful for most apps, and it’s ugly too!

  10. qwerty, I think this is exactly what Apple is doing: single-window UIs like Final Cut Pro, iMovie, iOS, and to some degree Lion look like an attempt at taking the chore of moving and sizing windows off the user and simply “doing the right thing” when it comes to on-screen display. I don’t see the parallel to the desktop metaphor right now (windows actually predate the desktop metaphor, after all).

  11. Chris, definitely. I don’t think the granularity of fullscreen content has to be the application level, though. You could place several applications or documents on one larger screen, and the OS would try to figure out how to split the screen between them.

  12. Maybe the level of abstraction of computing, and of software in particular, is the characteristic that distinguishes them from other technology trends and phenomena.

    A printing press is graspable, and though learning to read is initially rough, the chore has an end.

    What of the Facebook client and its use of an API? Will Grandpa ever know this? Will his granddaughter need to? Want to?

    • I’m not from the Middle Ages, so I don’t know whether a printing press wasn’t seen as complicated magic back then as well (and perhaps they still are today; just look at the complexity of a modern 4-color offset printing machine). I do know that light switches were seen as complex, though, and cars, and many other things we consider easy to understand. How shallow that understanding is can be seen in dreams: usually it is either light enough, or it stays dark even if we toggle the light switch, because our brain doesn’t *really* make the connection.

      I agree that APIs will probably not be understood by the typical user. But then they don’t have to. All they need to know is that there are different applications, some in your browser, some elsewhere. I didn’t mean to say that people in the future will learn *everything*. All the majority will learn is a working knowledge, and a more “standardized”, more correct mental model.

  13. Travis, the difference between “simple” concepts like applications and more complex ones like programming is the amount of time you dedicate to them, and how they relate to day-to-day work:

    Programming was a completely new, creative task that had to be added to the user’s day. Like learning how to repair a car or how to construct and put up your own shelves, it is something you can learn and do, but most people choose not to, because they have things they enjoy doing more, and people to do the building for them.

    Knowing what an application is involves a few concepts, and is something anyone using a computer today needs to learn (to some degree) to use it effectively. More akin to learning how doors work, and that they can be locked (and how).

    You don’t have to know how the teeth work on a key, or even how to build a lock. You are completely right there.

  14. I don’t buy it. User confusion today comes out of an inferior representation of apps and data in today’s desktop metaphor. As others pointed out, it might be that the desktop has failed.

    If you look at the huge success of the iPad as a single-task platform, or at the full-screen representation of professional programs, which seem to be gaining traction rather than losing it, then I see a clear trend to get our focus back to the task at hand.

    Fast app switching or task switching is important; even background processing can be very important. However, having multiple programs for different tasks on your screen at the same time is normally just confusing.

    I’m confident that single-task environments are way more productive. The distractions of multitasking have done a lot to harm concentration. Humans simply waste a lot of their potential trying to do and grasp multiple tasks at once.

  15. Klaus: one thing you have to bear in mind when talking about multitasking is that a “task” means different things depending on whether you are talking about people or computers. Sure, people are poor at multitasking (despite what they may think), but the problem is that even when a human is undertaking just a single task, they may require more than one “computer task” to achieve it. Think of writing an article: a word processor or similar for what you are writing, and a web browser or similar for the reference material used in writing the article.

    The single-task approach of the iPad is great in the majority of situations but quickly falls to pieces in those edge cases where one human task spans what would, on a desktop, be multiple apps. You end up with app developers having to build email clients and web browsers into their apps, each one behaving slightly differently and each with nuances that have to be learned.

    And even then, you can get into sticky situations, even with the simplest tasks. If I want to email my wife a link to something I’ve seen in my RSS feeds, a “multitasking” interface will let me open my email and RSS reader side by side, drag the link across, then write an email *while looking at the article* saying “The most interesting bit is under the picture with the blue truck, when he talks about sprockets and widgets”.

    In a “single-tasking” interface, that task has more cognitive load; I can tap a button and bring up an email sheet which probably (depending on the app) already has a link to the article, but I have to remember “blue truck” and “widgets” and “sprockets” while typing in her email address and starting the email. This may not seem like much, but the cognitive load will increase dramatically as the detail of the discussion increases. I know I have had times when I had to copy what I was writing to the clipboard, back out to whatever I was responding to, remind myself, go back in and paste everything back (or, in the case of an email, save it as a draft, back out, then switch to my email client to complete the email).

    That said, I think in the majority of cases, the single-app focus of the iPad and similar is an improvement over the multiple-app focus of the traditional desktop. The edge cases are those where a single human task maps across several applications; for many people this happens rarely or never, but for a significant number this happens with almost everything they do.

    Whether application design will change to better accommodate this, I don’t know. Adding email sheets and “post to Twitter” buttons is a step in that direction, but it’s a small and not very well executed step.

  16. Case in point, I tried to post that from the web browser in my iPad’s RSS reader and it failed. I had to copy-paste the comment and switch to Safari to get it to work. One human task, multiple apps; the iPad UI failed for me at that point.

  17. Klaus, that’s because you’re obviously a guy. Most guys will never understand multitasking, and (unlike those to whom this article refers) I don’t think they’ll die out eventually. They’ll just rely on the women to do more than one thing at a time.

  18. Hamranhansenhansen 2011-04-19 at 17:01

    > Stacked windows may not be good for iPhones or iPads, but large
    > screens have their place too and they’re not going away. Using up
    > an entire 27″ display for one program or file is wasteful for most
    > apps, and it’s ugly too!

    A window is just a way to get one screen to pretend it is 4 screens. If you have 4 screens, you stop needing windows. Once you have 10 screens you don’t have the time to play with windows anymore. You might have one screen showing a classic windows-based Desktop, and one screen showing a widget Dashboard, and one screen showing a UNIX Terminal if you are extremely cantankerous, but your many other screens will be doing one thing and doing it really, really well, whether it is showing a video or editing video. People routinely lose a quarter of their productivity from reusing the screen. It’s a bug.

    It’s like I was talking to a guy who complained iPad was no good because he always has 2 windows open on his PC at a time, one where he is viewing something and one where he is editing. I said “get 2 iPads.” Or do your viewing on iPad and your editing full-screen on your PC. The iPad adds screens, it doesn’t take any away. You still have a Mac or PC. iPad not only plugs into TVs, it can beam video streams over Wi-Fi to TVs. You get more displays, not fewer. Many people who have an iPad and an iPhone with them are doing more multitasking than they used to do when they were carrying a notebook and a dumb phone.

    Anyway, windows are going the way of the command line.

  19. travisgamedev 2011-04-19 at 19:00

    Humans cannot truly multitask. It’s a deception we keep telling ourselves we are capable of. “It’s OK if I text and drive. I can multitask.” “Hey, Facebook-ing while I’m supposed to be watching the kids at the pool is OK, because I can multitask.” The truth is that all studies on human multitasking show it to be a fallacy: humans cannot do even two tasks well at the same time, let alone more. How many of you have worked on a project and realized the reason you couldn’t complete it had to do with email, instant messaging, people walking up to talk, etc.? All these windows vying for our attention when we need to be doing something else. It’s modern-day daydreaming. We have looked out of windows for thousands of years instead of doing what we need to get done. Now the computer windows are our enemy.

  20. I’m sorry, this has become a lengthy response to a comment and doesn’t really have anything to do with the article (which I really enjoyed reading!).

    @Klaus Busse I think you’re missing the point when you say “If you look at [...] the full-screen representation of professional programs, which seem to be gaining traction rather than losing it, then I see a clear trend to get our focus back to the task at hand.”

    _What_ are the “professional applications” you are talking about and _how_ or, better, _why_ do they profit from going full-screen?

    Final Cut, Logic, Pro Tools, InDesign, and similar applications do of course profit from being run full-screen on huge monitors. But this isn’t because they are “professional applications”; it’s because the tasks at hand get done better when you have a lot of space for them:
    When you have a ginormous audio composition to tackle, there is probably little use in having a browser, a spreadsheet, and whatnot visible as well. Instead you need an overview of those oh-so-many tracks, and possibly at least a few filter parameters here and there.
    The same goes for movie editing or complex layouts.

    But are these _all_ forms of professional application use?

    What about a word processor?
    For _creative_ writing, a distraction-free environment is certainly useful. But this is far from the only form of professional writing!
    In fact, I’d argue that it accounts for only a tiny minority of _all_ professional writing; there is still reviewing, summarizing, reporting, etc. And in most of those cases, you’ll definitely benefit from having your primary sources or scribbles at hand while you’re writing, _no matter what media they are_.

    What about programming, then?
    Personally, I have at the very least one piece of reference material open while I do it.
    Oh, and then of course there is the application I’m developing, in the code/build/debug cycle.
    If you’re doing mobile development, there’s an additional simulator. If you’re doing front-end web dev, you probably have at the very least one browser open (when ironing out kinks, it will be more like two or three, one of them in a VM…).

    And each of these last scenarios _is_ one task. They are just complex ones that require meta-information.
    Should I need to switch apps to glance at a technote or an answer on Stack Overflow, would that _improve_ productivity?
    I strongly doubt it!

  21. stsk, I found the quote on YouTube. It’s something Steve Jobs himself reported during his interview at AllThingsD with Walt Mossberg. It’s slightly different, but close enough. Apocryphal no longer! :-)
