How To Do More Than One Thing At Once: On “Lifehacking”

With Clive Thompson’s article in this weekend’s NYT Magazine on “Life Hackers”, a growing trend has reached a fever pitch: a kind of nerd revolution against nerd-made tools. As the article outlines, “multi-tasking” became the expected mode of office work (all computer-based work, really) with the universal adoption of the PC and its subsequent networking. Once all your work and all your communication take place through the same box, you end up constantly distracted, interrupted, harried. Suddenly productivity geeks like Merlin Mann and Danny O’Brien find themselves searching for ways to simplify, reduce, and focus, to turn off the metastasizing bundle of “multi-tasking” tools so they can, for God’s sake, actually get something done.

Part of the problem, it strikes me, is in the metaphor of “multi-tasking” itself. While the modern GUI is composed of an ever greater number of applications running simultaneously, the user ends up not using them all at once, but cycling rapidly through them one at a time, conducting a series of different (and often unrelated and mutually disruptive) tasks. It might better be called “sequential tasking,” since it’s only ever the computer that’s doing multiple things at once (keeping all those apps open in the background, patiently waiting for your attention to flit back over to them).

Now, after two years of working at a French dessert shop, I’m a pretty efficient waiter. You might say that when it comes to all the systems for prepping desserts and drinks at Pix Patisserie, I’m a power user, an alpha nerd. What makes me an efficient server isn’t that I move any faster than my co-workers, or even that I know where everything is better than they do. My real advantage is that I’ve figured out all the places where I can do more than one thing at once: while I’m waiting for a teapot to fill, I prep the mug, teabag, and other drinks; while a chocolate cake is heating, I scoop the ice cream that goes with it; etc. I’ve learned that there are some tasks I can comfortably overlap (checking in with the host while making coffee drinks, for example) and some I can’t (answering questions about prices while adding up tabs).

What it comes down to is this: if the tasks are different enough in mode (communication vs. dexterity), I can overlap them, but as soon as they get too similar (adding vs. remembering numbers), I get confused, and it takes me longer to do both tasks than if I’d taken them on in succession.

Similar limits take effect while working on the computer. I can listen critically to a podcast from IT Conversations or On The Media while working in Illustrator, but not while composing a Music For Dozens press release or reading 43 Folders. Conversely, while writing or reading, I can listen to MFDZ tracks to screen out My Chemical Romance songs, but I can’t watch a movie or TV show in the background and still follow the plot.

Finding sets of tasks like these that overlap well is the greatest possible productivity win, much better than more efficient modes of “multi-tasking” that merely let you switch rapidly between incompatible activities in order to minimize the ill effects of interruptions. When you’ve got a good overlap going, it’s like your available time doubles: you are actually getting two sustained tasks done simultaneously.

The key to successful overlap sets is that each task utilize different sensory inputs and different modes of concentration: listening for recognition of a song uses your ears and an automatic type of attention (when you hear a familiar song, you know it without having to do anything active).

Unfortunately, many of our tasks are stuck in one media type or another, and so we’re stuck tackling them through a fixed sensory input. I can’t read email while I’m working in Illustrator, not because my brain couldn’t handle it, but because both of those tasks want to use my eyes as their input path. We don’t just need bigger screens, as Thompson’s article seems to suggest; we need ways of translating our tasks away from our eyes, with more information channelled through auditory and even haptic outputs. Here are some wild ideas for accomplishing this:

  • automated email reading using good voice synthesis: When I’m using Illustrator and I get a new email, my mail client should know to read its content aloud without Mail ever having to switch into the foreground or even display anything on the screen. (For this, and some of my other ideas here, better voice synthesis than any I’ve heard would probably be a necessity, or at least a great luxury.) A rough sketch of what this might look like appears after this list.
  • haptic alerts: My chair pokes me or my Bluetooth cell phone vibrates when a process completes and my computer knows that I’m in the middle of reading a web page. In order to never have to break my task overlap, I should also have a button or key-combo I can hit to trigger an obvious next action to follow the alert (for example, opening a disk image that just completed downloading or playing some audio files that just finished ripping). See the second sketch after this list.
  • smart web page readers: A tool that can read the actual content of a web page aloud while ignoring the ads and other navigation. Maybe this exists already (it seems like it must for accessibility) but I’ve never seen it packaged as a productivity app. Again, this requires really good voice synthesis that I don’t have to struggle to understand and that won’t drive me batty over a long article. Another way of accomplishing this would be for more content providers to offer audio versions of their articles; in the world of podcasting, this seems desirable on its own merits anyhow. The third sketch after this list shows one crude way to do it.
  • tools that make it easy to switch seamlessly between modes: If I start reading a long NY Times article in front of my computer and then have to go downstairs to get and fold my laundry, or if I have to go out to run my errands, I should be able to switch over to an audio version (either speech generation or a provided human-read mp3) at exactly the place I left off. And then I should be able to switch back to reading when I return to the computer so that I can simultaneously listen to music. The last sketch after this list is a back-of-the-envelope take on the hand-off.
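
Here’s a rough sketch of the first idea, the email reader. It assumes a Mac (it leans on the built-in “say” command for speech synthesis) and a plain IMAP account; the server, username, and password are placeholders, and a real version would want to check whether Illustrator is frontmost before piping up.

```python
# Sketch: speak new mail aloud instead of showing it on screen.
# Assumes macOS (the "say" command) and placeholder IMAP credentials.
import email
import imaplib
import subprocess
import time

IMAP_HOST = "imap.example.com"   # placeholder
USERNAME = "me@example.com"      # placeholder
PASSWORD = "secret"              # placeholder

def speak(text: str) -> None:
    """Hand text off to the system speech synthesizer."""
    subprocess.run(["say", text], check=False)

def speak_unread_mail() -> None:
    conn = imaplib.IMAP4_SSL(IMAP_HOST)
    conn.login(USERNAME, PASSWORD)
    conn.select("INBOX")
    _, data = conn.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = conn.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        body = ""
        if msg.is_multipart():
            for part in msg.walk():
                if part.get_content_type() == "text/plain":
                    body = part.get_payload(decode=True).decode(errors="replace")
                    break
        else:
            body = msg.get_payload(decode=True).decode(errors="replace")
        speak(f"New mail from {msg['From']}. {body[:500]}")
    conn.logout()

if __name__ == "__main__":
    while True:              # poll once a minute; a real tool would get pushed to
        speak_unread_mail()
        time.sleep(60)
```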
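
And a toy version of the “trigger the next action” button from the second idea. This is just user-space Python standing in for real OS hooks; every name in it is made up for illustration.

```python
# Sketch: a user-space stand-in for an OS-level "next action" hook.
# All names here are hypothetical; nothing below is a real OS API.
from collections import deque
from typing import Callable, Deque, Tuple

class NextActionQueue:
    """Alerts leave behind a default follow-up action; one key-combo runs it."""

    def __init__(self) -> None:
        self._pending: Deque[Tuple[str, Callable[[], None]]] = deque()

    def alert(self, message: str, next_action: Callable[[], None]) -> None:
        # Deliver the alert however you like: a spoken phrase, a vibration,
        # a pop-up. The point is that it never steals window focus.
        print(f"[alert] {message}")
        self._pending.append((message, next_action))

    def continue_last(self) -> None:
        """Bind this to a global key-combo: run the most recent default action."""
        if self._pending:
            message, action = self._pending.pop()
            print(f"[continuing] {message}")
            action()

# Example: a download finishes while I'm reading; one keystroke later the
# disk image opens, and I never had to go hunting for the window.
queue = NextActionQueue()
queue.alert("disk image finished downloading",
            lambda: print("opening disk image..."))
queue.continue_last()
```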
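
For the smart web page reader, here is about the crudest thing that could work: keep only the paragraph text and hand it to the speech synthesizer. It assumes the third-party BeautifulSoup library and, again, the Mac “say” command; the URL is just an example.

```python
# Sketch: read the content of a web page aloud, skipping ads and navigation.
# Assumes macOS "say" and the third-party BeautifulSoup (bs4) package.
import subprocess
import urllib.request

from bs4 import BeautifulSoup  # pip install beautifulsoup4

def read_page_aloud(url: str) -> None:
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html, "html.parser")
    # Throw away the obvious non-content.
    for tag in soup(["script", "style", "nav", "header", "footer"]):
        tag.decompose()
    # Keeping only <p> text is a blunt heuristic, but it skips most ads
    # and navigation links on article pages.
    text = " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
    subprocess.run(["say", text], check=False)

read_page_aloud("https://www.example.com/some-long-article")  # example URL
```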
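
Finally, a back-of-the-envelope sketch of the reading-to-listening hand-off: estimate where to seek in the audio from how far I’ve read. The 150 words-per-minute narration rate is an assumption, not a measurement, and a provided human-read mp3 would really need embedded position markers to do this exactly.

```python
# Sketch: map how far I've read in the text to a rough audio timestamp.
# The 150 words-per-minute narration rate is an assumption.
WORDS_PER_MINUTE = 150.0

def audio_offset_seconds(article_text: str, chars_read: int) -> float:
    """Estimate where to seek in the audio version, in seconds."""
    words_read = len(article_text[:chars_read].split())
    return words_read / WORDS_PER_MINUTE * 60.0

article_text = open("saved_article.txt").read()   # hypothetical saved copy
print(f"Seek the mp3 to about {audio_offset_seconds(article_text, 4200):.0f} seconds in")
```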

(If anyone can think of any other wild ideas like these, or ways of accomplishing some of what I’m dreaming of, I would love to hear about it in the comments.)



3 Responses to How To Do More Than One Thing At Once: On “Lifehacking”

  1. Chris says:

    One way to implement this at the model-controller level would be to have an operating-system notion of a “next action.” This way, your haptic alert, or your Growl pop-up, or your spoken alert, or whatever, could have a unitary method for continuing the default action without too much user intervention. Developers could provide default next actions, and give users the option of specifying others.
    With OS hooks like this, you could then build as many different ways to interface with it (Quicksilver, etc.) as you wanted.

  2. Mr. Beardo says:

    The multi-tasking debate (if it is one) reminds me, in many ways, of the 1950s kitchen. Studies of the period have determined that all the wonderful new appliances, which ought to have saved time, actually increased time in the kitchen.
    Modern feminist critiques also point out the way women were simultaneously “updated” (via technology) and rendered obsolete (they had nothing to do but stare at their kitchen appliances).
    Here is a link to a very thorough article, analyzing the political implications of the 1950s kitchen from several angles:
    http://www.americanpopularculture.com/journal/articles/fall_2004/hellman.htm
    The kitchen analogy may be a stretch, but I still feel the weight of class oppression (and the flat, gray panels of a cubicle) whenever I hear talk of new computer multi-tasking programs.

  3. Laura says:

    I would love to be able to be reading a print book, then, when I get into my car to drive somewhere, be able to play an audio version of that same book. (I think publishers should start including audio files with every print book and stop ripping us off so much for audio books. CDs are so cheap to reproduce; I know it’s not free to produce audio books, but come on.) I’d want the ability to indicate at what “page” to start listening. Then, when I arrived at my destination, I would need the audio book to tell me at what page of the print book I could start reading.
