For the final project of the Core Interaction Studio course I teach at Parsons, I challenged my screenaged design students to re-imagine the textbook for the digital, networked age.
Here are some questions and observations that came out of the two-month-long process:
How does the next generation value print? If I can extrapolate, print is not dead or dying to these 19-20 year olds. It’s just less of a commodity and more of a luxury. For the most part they are sensitive to killing trees and at the same time intuitively aware of the power of print. Also, they see online publishing for what it is: fast, cheap, and hard to control.
Should textbooks be apps or will they be housed in a standalone device? Or a web service? Half of the students cast their lot with an existing platform (be it the iPad or the Web), and the other half felt the need to create their own, coincidentally reflecting the current e-reader market dynamic (Kindle/Nook vs Apple). (Strangely, Android didn’t even register.) Incidentally, 3 out of 18 students came with their own iPads, up from 0 last semester.
One group suggested a cloud-based Netflix-like subscription service called ShelfLife, which, for a $9.99 monthly fee, gives you access to ostensibly any book you’d need for your classes, synced and delivered to any of your devices. While in my mind they didn’t put enough thought into the “Now what?” question (i.e., the reading and studying experience), it does play out an interesting position, a thought experiment also taken up by Tim Carmody in his post for kottke.org, A Budget for Babel. How much would you pay for digital access to every book ever published?
Transposing the Netflix UI to books makes some sense on the surface (access to a huge catalog of content) but the consumption of books is a far different animal than movies.
Prototype website for ShelfLife, a book subscription service
The curious appeal of dual screens
Long live the Courier! I’ll admit, I was a big fan of Microsoft’s vaporware concept tablet, but the prevailing form factor these days seems to be a single screen tablet. That didn’t stop The Owl team from making an intriguing argument for a two-screen, wood-paneled device.
The spine hinges so it can fold into single screen, one-handed mode
If you see a stylus, they blew it. Or did they?
One of our guest critics, David Brown, an editor at Melcher Media who was responsible for bringing the highly acclaimed Our Choice iOS app to fruition (oh, and the dead-tree book too), raised a key point: how we put content in is just as important as the quality or experience of getting content out. This is especially true of textbooks, which expect to be marked up and highlighted by their owners. And that brings us to the argument still being had about the stylus versus the good ol’ finger. Of the three iPad owners in the class, two owned styli, which goes against Steve Jobs’ famous aphorism; when asked about other tablet platforms, he remarked: “If you see a stylus, they blew it.” My theory is that in order to do any detail work (like drawing or adjusting Bezier splines) you need something with more precision than your fat 30px finger. The other argument for a stylus is more psychological, and it has to do with how note-taking and doodling engage our memory. The Owl team included a stylus (in addition to supporting touch), thoughtfully chiseled to provide two different surfaces for mark-making: the sharp point for detail and a rounded long edge for highlighting. Nice touch.
Open or closed?
There was some debate as to whether the textbook of tomorrow should have internet access via a full browser. The Closed camp argued that the world is already too distracting and including access to the web would inevitably lead to a social media death spiral and no homework would get done. Who knew Generation Next was fully cognizant of how Twitter and Facebook are making them stupid? Does that mean these kids are actually more media savvy than the Executive Editor of the New York Times?
The Open camp argued the Web is ubiquitous anyways, to the point where it’s natural to want to Google a word or phrase you don’t know (instead of hoping that your built-in dictionary is any good) and not having that ability built in would be a huge omission. Google (and by extension, the Internet) has become a necessary context for information consumption.
Of course, once you open the floodgates to the entire Web, you will have to tolerate students checking their Facebook feeds in class (which I have actually tolerated grudgingly). I can only comfort myself by thinking this will prepare them for, say, all of the liveblogging they will need to do in the future.
A final thought about accessibility
If it feels wrong to expect students to pay $1000 a year in order to just participate in class, how fair will it be to require everyone to buy a Kindle/iPad/Owl? Though if Kevin Kelly is right, it’s not inconceivable that every student could be issued a Kindle for free at the beginning of the year. Chances are it’ll have ads, but hopefully they’ll just be ads for other books (and not, say, soda).
Under the Accessible Education Act of 2011, The Owl became the primary device available to students and teachers through federal funding. At the beginning of the school year, unique identification codes are distributed to schools and/or individual users, allowing them to place their orders for The Owl as a group, or as a single user.
(NB: There is much more to see and talk about from my class’ final projects. I just don’t have the time or room here. Thanks to Charis Poon, Zeke Shore and Lev Kanter of Type/Code, David Brown of Melcher Media for generously serving as last-minute guest critics.)
Back in the heyday of the 90s tech bubble, I was the Design Technologist for FEED Magazine. When I look back on it, FEED was a wonderful place to be at that time because it took it upon itself to look critically at media and culture (a digital New Yorker, if you will, at a time when the New Yorker didn’t even have a website). Plus, the writers and thinkers who made up FEED were an incredible bunch of visionaries and future stars, and they ended up churning out a great number of amazing books. Steven Johnson (Editor in Chief) alone has cranked out seven.
Anyways, Freyja Ballmer, who sort of inherited my position at FEED after I left, asked me to talk at the first Zeitgeist NY Panel this past October. (Zeitgeist is a think tank/social club for digital people.) I gave a little talk on the future of reading. You can read more about what happened here.
BTW, this is a stop motion video by Dan Paluska (one of the evening’s other speakers) of that day and the event, which comprises the latter half of his day (I go on from about 1:37 to 1:42):
The demos are getting more and more realistic (first Mag+, now this). My initial reax:
It took a whole team from Adobe partnering with the staff of Wired Magazine to do this demo. The tools aren’t there yet, but publishers, your newsrooms and staff need to look and work like this. Right now, I don’t think they’re geared for this type of production. Yet.
The question is what hardware is it running on? Can it run on any old tablet that supports multi-touch (not an iPad)? Who makes such a device?
Does the video have to play full-screen or can it play inline?
Can you resize the text? I don’t think you can. To me, it looks like they’re taking the assets from the InDesign layout and converting to Flash/Flex/AIR.
HTML5 and CSS3 + JavaScript can’t necessarily pull this off yet. Another big “YET”. jQTouch, I’m waiting…
I foresee an app per publication, not per issue, which means for books, you’ll buy the Penguin app, and the Knopf app, and pay a subscription fee to get particular chapters of books. In terms of magazines, you’ll buy the Wired app and the GOOD app, e.g., and pay a subscription fee to get particular issues. The question is sharing…
UPDATE: from Wired itself:
The content was created in Adobe InDesign, as is the case for the print magazine, with the same designers adding interactive elements, from photo galleries and video to animations, along with adapting the designs so it looks great in both portrait and landscape orientation. This is a departure from the usual web model, where a different team repurposes magazine content into HTML, unavoidably losing much of the visual context in the process. Wired.com is not a re-purposed version of the magazine, but rather a separately-produced news service.
Read more: http://www.wired.com/epicenter/2010/02/the-wired-ipad-app-a-video-demonstration/
During the iPad keynote last week, I was watching about five different windows (a live webcast plus a few constantly refreshing liveblogs) and Twittering away my snarky reactions to the garbled, half-heard things coming in over the wire. Looking back on it all, I realize now that most of what seemed monumental or outrageous at the time was simply my misunderstanding in the heat of the moment. Twitter will do that. And record it for posterity.
Now that I’ve had a week to digest and think about all of the excellent commentary coming from @gruber and others (@VenessaMiemis has an amazing roundup of the hubbub) I’m finally coming to a few conclusions (yes, I know, about a thing I haven’t even used with my own bare hands). There must be a word for “extensive speculation about a device that has not yet hit the market.”
When I said “portable” what I really meant was…
Design problem: Imagine a portable computing device. No, really, that’s it. Okay, let me put it another way: imagine a portable computing device that doesn’t require you to carry around all the other junk that comes standard with computers these days: ie, a power supply, a mouse, a keyboard, etc. What do you have left?
Assumption: All computers have keyboards. False. Don’t get me wrong. I loves me some command line once in a while. I, like most of my generation and younger, grew up learning the contortions of the QWERTY keyboard and can now type approximately as fast as I can think. Which is to say, I hit the backspace key a lot. But, if you stop and think about it (which is what Apple is very good at doing), how often are you really using the keyboard to its fullest? I suspect you’re not typing all 26 letters of the alphabet at all times. Maybe you’re using the spacebar frequently, or the arrow keys (if you’re playing games), or the numeric keypad when you have to type numbers into a spreadsheet. Okay, if you’re a writer, you use the alphabet a good majority of the time. But if you’re just browsing the web, you’re just clicking on links, looking at the screen, and clicking on some more links.
Which brings us to the mouse. The mouse is another super elegant engineering solution to the problem of computing. It brings us just a little closer to the machine. We’ve invented clever ways of orienting ourselves and creative control mechanisms that combine the mouse and the keyboard (I’m thinking first person shooters) but, again, if you step back and look at it, the mouse and the keyboard are very advanced kludges.
Humans are very good at adapting to their surroundings, from the arid deserts of Africa to the frozen tundra of the Arctic to the strange and awkward desktop metaphor (do people still use pencils?). We even lull ourselves into a feeling of comfort in these inhospitable environments: I think I read somewhere that upon smelling a foul stench, it takes a person only 7-8 minutes of constant exposure to become acclimated and unaware of the smell. No matter how terrible the interface is, it just has to be good enough. (See Windows 95.) I guess what I’m trying to say is: we’ve gotten complacent and comfortable with the keyboard and mouse as “the way we interact with computers.” I have to admire how Jobs and Apple are quietly picking us up and dropping us ever so gently into a new paradigm for computing: touch.
I don’t have any data to back this up, but I’d be willing to bet that somewhere in the research archives of Apple there are analyses of how much time an average user touches the keyboard, touches the mouse, and uses the chrome, as well as which applications are most often used. My speculation is that many of the iPad’s design decisions find their justification in this research, which probably shows that we use about 20% of the available functionality of most programs.
Nowhere in the iPad presentation does it say “People won’t own desktop computers anymore.” The iPhone does not replace the MacBook. The MacBook does not replace the iMac. And the iPad does not replace anything. It fills the gap. So don’t jump to the conclusion, like many are doing, that multitouch will take over our desktop computing interfaces and we will mourn the passing of the keyboard and mouse. We’ll still have them where we need them (ie, on our desks).
The iPad seems to be a better solution for a different context of computing: let’s call it constructive leisure. You’re traveling, on a plane. What do you see most people with computers doing? Reading, watching movies, catching up on email. Occasionally you’ll see somebody squinting over a spreadsheet or a Word doc. When I walk by those people in the aisle I usually think to myself: are they really being productive, or are they spending most of their time just twiddling the interface, trying to get at something? It looks like what Apple has done with the iPad (and the native apps built for it) is get rid of most of the interface and map the most common functions to basic touch gestures.
$499 is damn cheap. And I’d certainly pay more for a model with a camera. I’m sure, looking at how the iPod evolved over the years, there will certainly be fancier and more expensive models with videocameras and roomier hard drives. They just have to get people sold on the basic idea first.
When I showed the nook to my wife, the first thing she did was paw at the screen with her fingers. That is to say, her first impression of the thing was disappointment that the screen wasn’t responsive to touch. And kids, that ever-brutal focus group, won’t be “looking for the mouse“ (to paraphrase Clay Shirky); they’re going to be smudging the screen, every screen, with their grubby fingers. Apple has, like it or not, begun a shift in our expectations of user interfaces. Jobs & Co. kept repeating “It just works” in their presentation. That’s the highest bar of UI design, and not many hardware or software companies can go around repeating that with a straight face to their customers.
Interesting how a large tributary of discussion about Flash has begun to flow from the introduction of the iPad. (If you missed it, read this visual lament and Nick Bilton’s Why the iPad Web Demo Was Full of Holes.) It put on a bigger screen, so to speak, the growing movement that argues Flash is not “web-native”: i.e., it’s proprietary and closed, which is an ironic argument for Apple to be making.
Speaking of which, is this thing Open (with a capital O and no scare quotes)? How do I make cool stuff for the iPad without paying for a $99 license? Answer: HTML5 + CSS3 + JavaScript.
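To make that answer a little more concrete, here’s a minimal sketch (all the names here are mine, purely illustrative) of the kind of thing a plain web page could already do on the iPad using only the touch events Mobile Safari exposes: detect a horizontal swipe and treat it as a page turn, no App Store license required. The gesture logic is kept as a pure function, separate from the event wiring, so it’s easy to reason about:

```javascript
// Pure gesture logic: given where a touch started and ended,
// decide whether it was a page-turning swipe. Names are illustrative.
function swipeDirection(startX, endX, threshold) {
  var dx = endX - startX;
  if (Math.abs(dx) < threshold) return null; // too short to count as a swipe
  return dx < 0 ? "next" : "prev";           // swipe left = next page
}

// Browser wiring, guarded so the sketch also loads outside a browser.
// touchstart/touchend were the multi-touch events Mobile Safari supported.
if (typeof document !== "undefined") {
  var startX = 0;
  document.addEventListener("touchstart", function (e) {
    startX = e.touches[0].clientX;
  }, false);
  document.addEventListener("touchend", function (e) {
    var dir = swipeDirection(startX, e.changedTouches[0].clientX, 50);
    if (dir) console.log("turn to " + dir + " page");
  }, false);
}
```

Combine that with CSS transitions for the visual page flip and you have the skeleton of a touch-native web reading app, which is roughly the bet jQTouch and its successors were making.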
I thought it was very telling that the last slide Steve Jobs showed in his presentation was this slide:
At the intersection of Technology and Liberal Arts
I think it makes a great point, especially to all the techie naysayers who think “it’s just an over-sized iPhone” or “I went to the Apple special event and all I got was this lousy picture frame.” It’s hard enough to make a chip scream or do multi-threading or cram a camera into a thin chassis. What’s really really hard is to make the product usable, for the masses. And you can’t solve the problem of usability with engineers and math PhDs. You still need, for lack of a better word, “artists”, right-brained people to imagine and think about the intangible aspects, the experience, the “magic”.
Gizmodo leaked a video yesterday showing Microsoft’s Courier, a dual panel touch screen (+ stylus) computer that is drawing obvious comparisons to the currently non-existent Apple iTablet. I mean there is an unhealthy amount of speculation going on trying to guess at what Apple’s next move will be and the Courier provides, if anything, a brief distraction from that. But it does raise some interesting UI points that Apple may already be considering. Not one screen, but two.
Photo: Gizmodo
Perhaps the folks at Pioneer Labs (a team within Microsoft’s Entertainment and Devices unit) were peeking at Apple’s recent patent filing (deliberately dorky drawings that are half wireframe, half police sketch), which many have speculated represents their design for the iTablet:
Notice the dual screen concept. Just for perspective, this was their original patent filing for the iPhone, next to the actual thing. Credit: Unwired Review
So it is in this context that we must remember that Courier is just a concept piece wrapped in an enigma shrouded in vapor. There has been no release date, no evidence of production, and for all we know, it could cost $12,500 retail. (What, did somebody say Surface?)
In any case, what is noteworthy about Courier is that it makes a really good case for the dual screen environment as optimal for reading, and by extension, for e-books and e-readers. The Nintendo DS has been doing this for some time.
Of course, we had to start somewhere, and right now the Kindle and the iPhone are, well, “somewhere”. The iPhone is admirably performing many tasks, one of which is displaying books and reading material, which it does somewhat grudgingly. I have two hesitations about reading on my iPhone: I can’t really read while I’m fearing for my battery life, and the screen size is just too small to be luxurious. I want reading books and long articles to be a pleasure and not simply super-convenient. As a reading platform, today’s single-screen iPhone just isn’t going to do to the book publishing industry what the iPod did to the music industry.
And the Kindle, well, every time I look at it I think of my first Palm or my first iPod (both of which I still have): dull, monochromatic, and awfully low res.
We Are Not Cyclops
When it comes to screens, more, in this case, really is better. I recently upgraded my home office setup to include a second monitor, and by golly, the doubling of screen real estate, while not exactly doubling my productivity (studies show a 20-30% increase in productivity, whatever that means), sure gives me a lot more breathing room and I find myself spending less time moving windows out of the way and manipulating the furniture, and more time doing what I’m doing.
But it’s not just about sheer number of pixels at your disposal. I think there’s something psychologically useful about the actual separation of the total area into two regions, left and right, that allows us to take advantage of peripheral, ambient information. It’s almost like the second screen is a big margin that gives the main focal area a needed dose of breathing room.
Maneesh Agrawala, a “Computer Vision Technologist” who just won a 2009 MacArthur Genius Grant, elaborated on the benefits of a dual-screen approach in a 2008 paper (and video) called “Navigation Techniques for Dual-Display E-Book Readers”. (Credit on the paper also goes to Nicholas Chen, François Guimbretière, Morgan Dixon, and Cassandra Lewis.) They made the observation that when we read (especially longer pieces) we tend to flip back to a previous page to refer to a character or name or concept from before, and I think the visual memory of where that information was positioned in the visual field helps recall. That’s why two opposing screens that are framed separately (rather than simply dividing one monolithic screen) make cognitive sense to our modern brains, which have grown up in the culture of the codex. The “page” (or, as I like to call it, the “unit of reading”) we just read is a convenient glance away if we need to refer back to it, instead of being completely out of our field of vision, in the ether somewhere, requiring us to find some button or other to get it back.
One monolithic screen = bad
But my biggest argument for a dual display e-reader is really an intimacy issue. Having two screens allows you to shield what you’re reading from the person sitting next to you, and helps bring the act of reading back to a private experience, where you can cuddle up and luxuriate with the words, just you and the author, and get lost in another world.