This post was imported from my old Drupal blog. To see the full thing, including comments, it's best to visit the Internet Archive.

My dad (Hello Barry!) has much more time to surf than I do, and often sends me interesting links. One that he sent recently was a video presentation of a multi-touch interface that’s really worth checking out. I thought I’d just write down a few of my thoughts on that and one of the talks at DocEng 2006 on interactive paper, since they’re both about how we might interact with computers in more intuitive ways in the future.

There’s apparently a multi-touch interface on the iPhone, but Jeff Han’s presentation on a huge multi-touch screen is really awe-inspiring. The obvious applications are in manipulating graphics: touch an image with two fingers, then move them apart to zoom, or around each other to rotate; touch an image with four fingers, and move them in concert to skew. Jeff Han shows this with both a graphics package and a 3D map similar to Google Earth.
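
If you’re wondering what the software actually gets out of those two fingers, here’s a minimal sketch in Python – nothing to do with Han’s real code, just the geometry: the zoom factor is the ratio of the distances between the fingers, and the rotation is the change in the angle of the line joining them.

```python
import math

def two_finger_transform(p1_start, p2_start, p1_end, p2_end):
    """Turn the start and end positions of two fingers into (zoom, rotation)."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    zoom = dist(p1_end, p2_end) / dist(p1_start, p2_start)        # fingers further apart => zoom in
    rotation = angle(p1_end, p2_end) - angle(p1_start, p2_start)  # the line between the fingers has turned
    return zoom, rotation

# Fingers start 100px apart, end further apart and slightly twisted:
print(two_finger_transform((0, 0), (100, 0), (-20, 0), (120, 10)))
# -> (about 1.40, about 0.07 radians)
```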

But let’s be honest, most of the time we mark-up geeks spend on computers goes on reading and writing. What multi-touch gestures could we use on documents? Proper touch-screen keyboards: being able to actually hold down the Shift/Ctrl/Alt keys in combination rather than having them act as toggles. Doing a copy operation by “holding down” highlighted text with one finger while you drag a copy of it with another. Using a kind of vertical “pinch” to delete or hide lines, and a horizontal one to do the same for characters… There are things you could do, but I’m not sure they’d give a big enough usability win to drive multi-touch adoption. The main problem, I think, is screen real-estate: on a laptop the screen isn’t much bigger than the keyboard, so if it also has to serve as the touch surface there’s hardly any room left to actually display what you’re writing. So my guess is that multi-touch will mainly be used on large displays – communal displays and (literally) desktops – and in the graphics/CAD world.

I owned a tablet for a while, and for the most part I used it as a normal laptop. The one task I really found easier in tablet mode was reading and annotating documents, because it gave the experience closest to paper: I could hold it close, page through easily, and use the pen both to highlight things I wanted to comment on and to make the comments themselves. (This was several years ago: at the time I couldn’t actually write on the document itself, whereas now you get proper “ink annotations” in Word.) How are we going to read documents in the future?

There’s the long-term promise of flexible screens, but in the meantime I learned about Anoto from one of the many interesting talks at DocEng 2006: Print-n-Link: Weaving the Paper Web by Moira C. Norrie, Beat Signer and Nadir Weibel. Anoto uses a pattern of dots, arranged around a grid and printed (very faintly) on the paper. The pen has a small camera that sees the dots and can therefore work out where on the page it is. This information can be used by the pen itself or transferred to a computer. Mostly this is aimed at business types who can take hand-written notes during a meeting on a special Anoto notepad and then transfer them to their computer later on. The sweetest application, though, is the Fly Pentop computer, which is aimed at school children: you can do things like draw a calculator anywhere on your notepad and then use it to perform calculations by hitting the buttons with your pen. I really want one!
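
To give a flavour of how a dot pattern can encode position at all, here’s a toy sketch in Python. It is emphatically not Anoto’s real (proprietary) encoding – the window size and the “nudge each dot north/east/south/west” scheme are my own assumptions – but it shows the principle: every small patch of dots is effectively unique, so seeing one patch tells the pen where it is.

```python
# Toy illustration of a position-encoding dot pattern (NOT Anoto's real scheme).
# Each grid cell gets a dot nudged in one of four directions; any small window
# of dots then (almost certainly) identifies a unique spot on the page.
import random

WINDOW = 6  # how many dots across the pen's camera sees at once (a guess, for the toy)

def make_page(width, height, seed=42):
    """Cover a page with dot offsets and index every WINDOW x WINDOW patch."""
    random.seed(seed)
    page = [[random.choice("NESW") for _ in range(width)] for _ in range(height)]
    index = {}
    for y in range(height - WINDOW + 1):
        for x in range(width - WINDOW + 1):
            patch = tuple(page[y + dy][x + dx]
                          for dy in range(WINDOW) for dx in range(WINDOW))
            # With 4**36 possible patches, accidental clashes on a toy page are
            # vanishingly unlikely, so this lookup table is effectively unique.
            index[patch] = (x, y)
    return page, index

def locate(page, index, x, y):
    """What the pen does: read the patch under the nib and look up where it is."""
    patch = tuple(page[y + dy][x + dx]
                  for dy in range(WINDOW) for dx in range(WINDOW))
    return index.get(patch)

page, index = make_page(60, 60)
print(locate(page, index, 17, 23))  # -> (17, 23), recovered purely from the dots
```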

Norrie, Signer & Weibel presented a system that prints academic articles overlaid with the Anoto dots, so that when you tap a reference in the printed article with your Anoto pen, the referenced article comes up on your computer screen. Beat Signer controlled and annotated the presentation using the pen on a printed version of the slides. So cool.
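
Once the pen can report a page number and an (x, y) position, that behaviour is conceptually just a lookup from printed regions to actions. Here’s a hand-wavy Python sketch of the idea – the region coordinates and URLs are invented for illustration, and this isn’t the authors’ actual implementation.

```python
# Sketch of mapping pen taps on a printed article to on-screen actions.
# Coordinates and URLs below are made up purely for illustration.
import webbrowser

# Bounding boxes of the typeset references on each printed page:
# (page, x0, y0, x1, y1) -> URL of the cited article.
REFERENCE_REGIONS = {
    (7, 56, 610, 300, 622): "https://example.org/cited-paper-1",
    (7, 56, 624, 300, 636): "https://example.org/cited-paper-2",
}

def on_pen_tap(page, x, y):
    """Called when the pen reports a tap; open the reference under it, if any."""
    for (p, x0, y0, x1, y1), url in REFERENCE_REGIONS.items():
        if p == page and x0 <= x <= x1 and y0 <= y <= y1:
            webbrowser.open(url)
            return url
    return None

print(on_pen_tap(7, 120, 615))  # a tap inside the first reference's box
```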

The real innovation here, I think, was actually printing documents with the Anoto dots and associating document-specific behaviour with them. It provides a way of getting annotations (mark-up!) onto your document by putting the smarts in a pen rather than a tablet PC. Of course the “display” can’t update (except for the fact that you’ve actually written on it), but you don’t need that when you’re reviewing documents. There’s more of Norrie, Signer & Weibel’s research on the GlobIS website, but I haven’t had time to look at it yet.

Know of any other funky user-interface innovations on the horizon? Tell me about them!