We recently created an interactive map for the James Mintz Group, an international investigative services firm, that shows where cases related to the Foreign Corrupt Practices Act have been settled. The map shows multiple dimensions: the country, the estimated amount of the penalty, and the sector each case belongs to. On rollover, this data is displayed as a treemap showing the relative sizes of the penalties for the country in question:
There is also a sector filter and each box in the treemap opens up the actual materials pertaining to each case.
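Under the hood, a treemap just partitions a rectangle in proportion to each value. Here's a minimal slice-and-dice sketch of that sizing logic; the case labels and penalty amounts below are invented for illustration, not data from the actual map:

```python
# Slice-and-dice treemap: divide a rectangle into strips whose
# widths (or heights) are proportional to each value.
# (Case names and amounts here are made up for illustration.)

def treemap(items, x, y, w, h, horizontal=True):
    """Return (label, x, y, w, h) rectangles sized by value."""
    total = sum(value for _, value in items)
    rects = []
    offset = 0.0
    for label, value in items:
        frac = value / total
        if horizontal:
            rects.append((label, x + offset, y, w * frac, h))
            offset += w * frac
        else:
            rects.append((label, x, y + offset, w, h * frac))
            offset += h * frac
    return rects

cases = [("Case A", 800), ("Case B", 150), ("Case C", 50)]
for label, rx, ry, rw, rh in treemap(cases, 0, 0, 100, 60):
    print(label, round(rw * rh))  # area is proportional to penalty
```

A real implementation would recurse (sector, then case) and alternate the split direction per level, but the proportional-area idea is the same.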
Also, for some background on the FCPA, Frontline did a show on bribery and the history of the FCPA back in 2009.
UPDATE: The Wall Street Journal law blog has a post on the map here.
We recently collaborated with Kiss Me I’m Polish (with awesome illustrations by Papercut) to do a slide presentation and a series of infographics for the Knight News Challenge’s Interim Review of their 2007-8 grant winners. Here’s the Slideshare presentation:
Here’s my favorite slide from the deck, The Hacker Journalist (or Journalist 2.0), based loosely on Brian Boyer, a 2008 winner as well as a repeat winner this year.
For the final project of the Core Interaction Studio course I teach at Parsons, I challenged my screenaged design students to re-imagine the textbook for the digital, networked age.
Here are some questions/observations that came out of the two-month-long process:
How does the next generation value print?
If I can extrapolate, print is not dead or dying to these 19-20 year olds. It’s just less of a commodity and more of a luxury. For the most part they are sensitive to killing trees and at the same time intuitively aware of the power of print. Also, they see online publishing for what it is: fast, cheap, and hard to control.
Should textbooks be apps or will they be housed in a standalone device? Or a web service?
Half of the students cast their lot with an existing platform (be it the iPad or the Web), and the other half felt the need to create their own, coincidentally reflecting the current e-reader market dynamic (Kindle/Nook vs Apple). (Strangely, Android didn’t even register.) Incidentally, 3 out of 18 students came with their own iPads, up from 0 last semester.
One group suggested a cloud-based Netflix-like subscription service called ShelfLife, which, for a $9.99 monthly fee, gives you access to ostensibly any book you’d need for your classes, synced and delivered to any of your devices. While in my mind they didn’t put enough thought into the “Now what?” question (ie, the reading and studying experience), it does play out an interesting position, a thought experiment also taken up by Tim Carmody in his post for kottke.org, A Budget for Babel. How much would you pay for digital access to every book ever published?
Transposing the Netflix UI to books makes some sense on the surface (access to a huge catalog of content) but the consumption of books is a far different animal than movies.
The curious appeal of dual screens
Long live the Courier! I’ll admit, I was a big fan of Microsoft’s vaporware concept tablet, but the prevailing form factor these days seems to be a single screen tablet. That didn’t stop The Owl team from making an intriguing argument for a two-screen, wood-paneled device.
If you see a stylus, they blew it. Or did they?
One of our guest critics, David Brown, Editor at Melcher Media, who was responsible for bringing the highly acclaimed Our Choice iOS app to fruition (oh, and also the dead-tree book too), raised a key point: how we put content in is just as important as the quality or experience of getting content out. This is especially true of textbooks, which expect to be marked up and highlighted by their owners. And that brings us to the argument still being had about the stylus versus the good ol’ finger. Of the three iPad owners in the class, two owned styli, which goes against Steve Jobs’ famous aphorism; when asked about other tablet platforms, he remarked: “If you see a stylus, they blew it.” My theory is that in order to do any detail work (like drawing or adjusting Bezier splines) you need something with more precision than your fat 30px finger. The other argument for a stylus is more psychological, and it has to do with how note-taking and doodling engage our memory. The Owl team’s device responds to touch but also includes a stylus, thoughtfully chiseled to provide two different surfaces for mark-making: a sharp point for detail and a rounded long edge for highlighting. Nice touch.
Open or closed?
There was some debate as to whether the textbook of tomorrow should have internet access via a full browser. The Closed camp argued that the world is already too distracting and including access to the web would inevitably lead to a social media death spiral and no homework would get done. Who knew Generation Next was fully cognizant of how Twitter and Facebook are making them stupid? Does that mean these kids are actually more media savvy than the Executive Editor of the New York Times?
The Open camp argued the Web is ubiquitous anyways, to the point where it’s natural to want to Google a word or phrase you don’t know (instead of hoping that your built-in dictionary is any good) and not having that ability built in would be a huge omission. Google (and by extension, the Internet) has become a necessary context for information consumption.
Of course, once you open the floodgates to the entire Web, you will have to tolerate students checking their Facebook feeds in class (which I have actually tolerated grudgingly). I can only comfort myself by thinking this will prepare them for, say, all of the liveblogging they will need to do in the future.
A final thought about accessibility
If it feels wrong to expect students to pay $1000 a year in order to just participate in class, how fair will it be to require everyone to buy a Kindle/iPad/Owl? Though if Kevin Kelly is right, it’s not inconceivable that every student could be issued a Kindle for free at the beginning of the year. Chances are it’ll have ads, but hopefully they’ll just be ads for other books (and not, say, soda).
There’s also a clever piece of SciFi in the Owl group’s website that addresses this issue:
How can I order The Owl?
Under the Accessible Education Act of 2011, The Owl became the primary device available to students and teachers through federal funding. At the beginning of the school year, unique identification codes are distributed to schools and/or individual users, allowing them to place their orders for The Owl as a group, or as a single user.
(NB: There is much more to see and talk about from my class’ final projects. I just don’t have the time or room here. Thanks to Charis Poon, Zeke Shore, and Lev Kanter of Type/Code, and David Brown of Melcher Media, for generously serving as last-minute guest critics.)
- Mashable on tablets in education
- Cathy Marshall has written and researched extensively the ways we read and collaborate in the Digital Age
- Turns out Kindles aren’t all that popular on campuses
- Our class Tumblr
From 5:19pm until 7:30pm
…and 7:30 until 11:18pm.
The memorable bits: bacon maple cupcakes from kumquat cupcakery, visits by all our great friends and colleagues, pork sliders, chile-spiced dried mangoes from Trader Joe’s, Zubrowka (something about bison and vodka from Poland), the whiteboard wall, and the game of “Recreate the Photoshop Tools Palette From Memory While Drunk” that finished off the night.
Back in the heyday of the 90s tech bubble, I was the Design Technologist for FEED Magazine. Looking back on it, FEED was a wonderful place to be at that time because it took it upon itself to look critically at media and culture (a digital New Yorker, if you will, at a time when the New Yorker didn’t even have a website). Plus, the writers and thinkers who made up FEED were an incredible bunch of visionaries and future stars, and they ended up churning out a great number of amazing books. Steven Johnson (Editor in Chief) alone has cranked out 7.
Anyways, Freyja Ballmer, who sort of inherited my position at FEED after I left, asked me to talk at the first Zeitgeist NY Panel this past October. (Zeitgeist is a think tank/social club for digital people.) I gave a little talk on the future of reading. You can read more about what happened here.
BTW, this is Dan Paluska’s (one of the evening’s other speakers) stop motion video of that day and the event, which comprises the latter half of his day (I go on about 1:37 to 1:42):
That’s right! We’re moving again, this time, we’re sharing a space with Agnieszka Gasparska of Kiss Me I’m Polish, where we’ll be collaborating and doing more great things together! Stay tuned!
In the meantime, update your address books:
151 W.19th St, 5th Fl
New York, NY 10011
What would a “free market correction” feel like? What if the U.S. Government had allowed AIG, JPMorgan, Bank of America, and Citigroup to collapse? What would have happened if Treasury had chosen not to prop up the nine largest American banks on October 14, 2008? What news reports would have trickled out over Twitter, CNN, CNBC, Fox News? What emails and text messages would we have received from our banks? What would the impact have felt like over a 48 hour period? Subscribers would experience this alternate reality as filtered through Twitter streams, emails, text messages, photographs, video, and podcasts.
Suggestions and ideas are welcome.
Disclaimers: I don’t have an iPad yet (I ordered the 3G version), and I’m not officially an iPhone developer in that I haven’t made or sold any apps yet. This does not, however, keep me from posting my opinions on both from the perspective of someone who is looking to create and design apps for the iPhone and iPad.
What happened? Why did Apple shut this route to app-ification down?
My take on it is that now that Apple’s gotten deep into the mobile computing business, it’s started to care a lot more about how applications are written, because how they’re written seriously affects how well and efficiently they run. And mobile computing is all about ruthless efficiency, like engineering for space travel. If your application (and then your phone) grinds to a halt because of some unidentifiable memory leak (I’m using “memory leak” here as a euphemism for “poorly written code that makes a program run sub-optimally”), and you need to make an emergency phone call, you’re ditching your iPhone right then and there.
The same “memory leak” might be happening right now on an application you’re running on your desktop (either a Flash SWF in your browser or some other desktop application) but because of the huge RAM and memory sizes of most desktop computers, these leaks take a lot longer to become noticeable. (I’m talking out of my ass here but correct me if I’m wrong.)
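To make the euphemism concrete, here's a contrived sketch of the kind of slow leak I mean: a global registry that holds a reference to every listener ever created, so nothing is ever freed. (The names and numbers are mine, purely for illustration.)

```python
# A contrived "leak": a global registry keeps a strong reference
# to every handler ever registered, so handlers (and the screens
# they capture) can never be garbage-collected. On a desktop with
# gigabytes of RAM you may never notice; on a phone this kind of
# creep eventually grinds everything to a halt.

listeners = []

def register(handler):
    listeners.append(handler)  # appended but never removed

class Screen:
    def __init__(self, name):
        self.name = name
        # the handler closes over self, pinning the Screen in memory
        register(lambda event: print(self.name, event))

# Simulate opening and "closing" a thousand screens:
for i in range(1000):
    Screen(f"screen-{i}")  # each Screen object stays reachable forever

print(len(listeners))
```

Every `Screen` here is logically discarded after its loop iteration, yet all 1000 remain reachable through `listeners`, which is exactly the sub-optimal behavior that a constrained mobile device can't tolerate.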
Another way to look at it is that when you use a “meta-framework” to author in Flash/Flex and export to iPhone, you’re depending on Adobe to create the proper hooks to tie into Apple’s Touch APIs, and the assumption with these meta-frameworks is that you don’t really care what’s going on under the Apple hood; you just want to “drive the car” so to speak. What Apple is saying, to put it bluntly, is “You need to care about what’s going on under the hood if you want to make apps for the iPhone/iPad.”
As @gruber points out, “We’re still in the early days of the transition from the PC era to the mobile era,” and it looks like Apple is taking this opportunity to enforce standards and best practices on their mobile devices by a) making app developers use their tools and APIs and b) gatekeeping strictly during the app submission process before releasing apps to the App Store. It’s tough love, and some people are going to get hurt. Some will have their feelings hurt (Flash developers), and others are going to have their businesses hurt (MonoTouch, Adobe, and it’s also looking like Flurry/PinchMedia and other analytics companies are going to get shut out).
Big Questions: What about Google/Android? Will Apple’s benevolent dictatorship win out over a messy but open democracy?
For more reading on this subject, check these excellent articles:
- Apple’s prohibition of Flash-built apps in iPhone 4.0 related to multitasking from AppleInsider
- Daring Fireball on Why Apple changed Section 3.3.1
- Steven Johnson’s column on How Apple has rethought a gospel of the web
The demos are getting more and more realistic (first Mag+, now this). My initial reax:
- It took a whole team from Adobe partnering with the staff of Wired Magazine to do this demo. The tools aren’t there yet, but publishers, your newsrooms and staff need to look and work like this. Right now, I don’t think they’re geared for this type of production. Yet.
- The question is what hardware is it running on? Can it run on any old tablet that supports multi-touch (not an iPad)? Who makes such a device?
- Does the video have to play full-screen or can it play inline?
- Can you resize the text? I don’t think you can. To me, it looks like they’re taking the assets from the InDesign layout and converting to Flash/Flex/AIR.
- I foresee an app per publication, not per issue, which means for books, you’ll buy the Penguin app, and the Knopf app, and pay a subscription fee to get particular chapters of books. In terms of magazines, you’ll buy the Wired app and the GOOD app, e.g., and pay a subscription fee to get particular issues. The question is sharing…
UPDATE: from Wired itself:
The content was created in Adobe InDesign, as is the case for the print magazine, with the same designers adding interactive elements, from photo galleries and video to animations, along with adapting the designs so it looks great in both portrait and landscape orientation. This is a departure from the usual web model, where a different team repurposes magazine content into HTML, unavoidably losing much of the visual context in the process. Wired.com is not a re-purposed version of the magazine, but rather a separately-produced news service.
Read more: http://www.wired.com/epicenter/2010/02/the-wired-ipad-app-a-video-demonstration/
(Putting nose back to grindstone…)
Finally got Buzz activated on my Google account yesterday and wanted to drop a quick post about it, because I’m seeing all these “Ooooh, Google’s coming after Facebook and Twitter!” posts and I think a larger point is being missed. The implications of Buzz reach farther than just these two services (which already encompass a wide swath of social networking activity): Google has already eaten everybody’s lunch. Here are two major points:
- It’s about the mobile, stupid! Try using Buzz on m.google.com/app/buzz and you’ll see exactly what I’m talking about. On the plus side, you’ll notice that, because it has your location, it recommends places around you and shows them to you on the “Buzz map”, nibbling on Yelp’s and Foursquare’s lunches. On the minus side, it does the same thing with people, ie, it shows you “public” conversations of people right around the corner in your neighborhood. This is one of those times where engineers making interaction design decisions turns out not to be such a great idea.
Engineer: Wouldn’t it be awesome to be able to see what “buzz” is being generated in your vicinity? We can do that since we know all the Gmail users in the area! Sweet! We are SO smart!!!
Unfortunately, what this “buzz” amounts to is some seriously embarrassing (and unwitting) privacy violation:
I have no idea who Brent is, except now I gather he is somewhere in my zip code and I probably brush by him at the grocery store, and now I know he’s going to have a HOT date night with his friend, Kate, who will be wearing her black stilettos and swilling Hennessy! Gross.
- Asymmetrical following: yes, this is Twitter’s lunch. The difference is that it’s less relevant, given that a) the following isn’t voluntary per se, since Google takes the liberty of adding people you supposedly send lots of Gmail to, and then b) it shows you the conversations they are having with other people (most of whom are perfect strangers). This is weird and creepy, far from the interestingness you get from Twitter’s opt-in asymmetrical following model. (I’ve seen some inane, offensive, childish, and incomprehensible bickering that I really didn’t need to see.)
They know where you live, they know where you are right now, they know what you’re talking about, they know what you’re thinking, they know all your dirty little secrets. And if you’re not careful, you’re publishing everything for the world to see, all in the name of “sharing”! Happy Buzzing!
During the iPad Keynote last week, I was watching about 5 different windows (a live webcast plus refreshing a few liveblogs) and Twittering away my snarky reactions to the garbled, half-heard things that were coming in over the wire, and looking back on it all, I realize now that most of what had seemed monumental or outrageous at the time were just simply my misunderstandings in the heat of the moment. Twitter will do that. And record it for posterity.
Now that I’ve had a week to digest and think about all of the excellent commentary coming from @gruber and others (@VenessaMiemis has an amazing roundup of the hubbub) I’m finally coming to a few conclusions (yes, I know, about a thing I haven’t even used with my own bare hands). There must be a word for “extensive speculation about a device that has not yet hit the market.”
- Design problem: Imagine a portable computing device. No, really, that’s it. Okay, let me put it another way: imagine a portable computing device that doesn’t require you to carry around all the other junk that comes standard with computers these days: ie, a power supply, a mouse, a keyboard, etc. What do you have left?
- Assumption: All computers have keyboards. False. Don’t get me wrong. I loves me some command line once in a while. I, like most of my generation and younger, grew up learning the contortions of the QWERTY keyboard and can now type approximately as fast as I can think. Which is to say, I hit the backspace key a lot. But, if you stop and think about it (which is what Apple is very good at doing), how often are you really using the keyboard to its fullest? I suspect you’re not always typing all 26 letters of the alphabet at all times. Maybe you’re using the spacebar frequently, or the arrow keys (if you’re playing games), or the numeric keypad when you have to type numbers into a spreadsheet. Okay, if you’re a writer, you use the alphabet a good majority of the time. But if you’re just browsing the web, you’re just clicking on links, looking at the screen, and clicking on some more links.
- Which brings us to the mouse. The mouse is another super elegant engineering solution to the problem of computing. It brings us just a little closer to the machine. We’ve invented clever ways of orienting ourselves and creative control mechanisms that combine the mouse and the keyboard (I’m thinking first person shooters) but, again, if you step back and look at it, the mouse and the keyboard are very advanced kludges.
- Humans are very good at adapting to their surroundings, from the arid deserts of Africa to the frozen tundra of the Arctic to the strange and awkward desktop metaphor (do people still use pencils?). We even lull ourselves into a feeling of comfort in these inhospitable environments. I think I read somewhere that upon smelling a foul stench, it only takes a person 7-8 minutes of constant exposure to become acclimated and unaware of the smell. No matter how terrible the interface is, it just has to be good enough. (See Windows 95.) I guess what I’m trying to say is: we’ve gotten complacent and comfortable with the keyboard and mouse as “the way we interact with computers.” I have to admire how Jobs and Apple are quietly picking us up and dropping us ever so gently into a new paradigm for computing: touch.
- I don’t have any data to back this up, but I’d be willing to bet somewhere in the research archives of Apple there are analyses of how much time an average user touches the keyboard, touches the mouse, uses the chrome, as well as which applications are most often used. My speculation is that the iPad finds its justification in many decisions in this research, which probably shows that we use about 20% of the available functionalities of most programs.
- Nowhere in the iPad presentation does it say “People won’t own desktop computers anymore.” The iPhone does not replace the MacBook. The MacBook does not replace the iMac. And the iPad does not replace anything. It fills the gap. So don’t jump to the conclusion, like many are doing, that multitouch will take over our desktop computing interfaces and we will mourn the passing of the keyboard and mouse. We’ll still have them where we need them (ie, on our desks).
- The iPad seems to be a better solution for a different context of computing: let’s call it constructive leisure. You’re traveling, on a plane. What do you see most people with computers doing? Reading, watching movies, catching up on email. Occasionally you’ll see somebody squinting over a spreadsheet or a Word doc. When I walk by those people in the aisle I usually think to myself, are they really being productive, or are they spending most of their time just twiddling the interface, trying to get at something? It looks like what Apple’s done with the iPad (and the native apps built for it) is gotten rid of most of the interface and mapped the most common functions to basic touch gestures.
- $499 is damn cheap. And I’d certainly pay more for a model with a camera. I’m sure, looking at how the iPod evolved over the years, there will certainly be fancier and more expensive models with videocameras and roomier hard drives. They just have to get people sold on the basic idea first.
- When I showed the nook to my wife, the first thing she did was paw at the screen with her fingers. That is to say, her first impression of the thing was disappointment that the screen wasn’t responsive to touch. And kids, that ever-brutal focus group, won’t be “looking for the mouse”, to paraphrase Clay Shirky; they’re going to be smudging the screen (every screen) with their grubby fingers. Apple has, like it or not, begun a shift in our expectations of user interfaces. Jobs & Co. kept repeating “It just works” in their presentation. That’s the highest bar of UI design, and not many hardware or software companies can go around repeating that with a straight face to their customers.
- Interesting how a large tributary of discussion about Flash has begun to flow from the introduction of the iPad. (If you missed it, read this visual lament and Nick Bilton’s Why the iPad Web Demo Was Full of Holes.) It put (on a bigger screen) the growing movement that argues Flash is not “web-native” — ie, it’s proprietary and closed, which is an ironic argument for Apple to be making.
- I thought it was very telling that the last slide Steve Jobs showed in his presentation was this slide:
I think it makes a great point, especially to all the techie naysayers who think “it’s just an over-sized iPhone” or “I went to the Apple special event and all I got was this lousy picture frame.” It’s hard enough to make a chip scream or do multi-threading or cram a camera into a thin chassis. What’s really really hard is to make the product usable, for the masses. And you can’t solve the problem of usability with engineers and math PhDs. You still need, for lack of a better word, “artists”, right-brained people to imagine and think about the intangible aspects, the experience, the “magic”.
Gizmodo leaked a video yesterday showing Microsoft’s Courier, a dual panel touch screen (+ stylus) computer that is drawing obvious comparisons to the currently non-existent Apple iTablet. I mean there is an unhealthy amount of speculation going on trying to guess at what Apple’s next move will be and the Courier provides, if anything, a brief distraction from that. But it does raise some interesting UI points that Apple may already be considering. Not one screen, but two.
Perhaps the folks at Pioneer Labs (a team within Microsoft’s Entertainment and Devices unit) were peeking at Apple’s recent patent filing (deliberately dorky drawings which are half-wireframe, half-police sketch), which many have speculated represents their design for the iTablet:
Notice the dual screen concept. Just for perspective, this was their original patent filing for the iPhone, next to the actual thing.
So it is in this context that we must remember that Courier is just a concept piece wrapped in an enigma shrouded in vapor. There has been no release date, no evidence of production, and for all we know, it could cost $12,500 retail. (What, did somebody say Surface?)
In any case, what is noteworthy about Courier is that it makes a really good case for the dual screen environment as optimal for reading, and by extension, for e-books and e-readers.
Of course, we had to start somewhere, and right now the Kindle and the iPhone are, well, “somewhere”. The iPhone is admirably performing many tasks, one of which is displaying books and reading material, which it does somewhat grudgingly. I have two hesitations reading on my iPhone: I can’t really read while I’m fearing for my battery life, and the screen size is just too small to be luxurious. I want reading books and long articles to be a pleasure and not simply super-convenient. The single screen iPhone today as a reading platform just isn’t going to do to the book publishing industry the same thing the iPod did to the music industry.
And the Kindle, well, every time I look at it I think of my first Palm or my first iPod (both of which I still have): dull, monochromatic, and awfully low res.
We Are Not Cyclops
When it comes to screens, more, in this case, really is better. I recently upgraded my home office setup to include a second monitor, and by golly, the doubling of screen real estate, while not exactly doubling my productivity (studies show a 20-30% increase in productivity, whatever that means), sure gives me a lot more breathing room and I find myself spending less time moving windows out of the way and manipulating the furniture, and more time doing what I’m doing.
But it’s not just about sheer number of pixels at your disposal. I think there’s something psychologically useful about the actual separation of the total area into two regions, left and right, that allows us to take advantage of peripheral, ambient information. It’s almost like the second screen is a big margin that gives the main focal area a needed dose of breathing room.
Maneesh Agrawala, a “Computer Vision Technologist” who just won a 2009 MacArthur Genius Grant, elaborated on the benefits of a dual screen approach in a 2008 paper (and video) called “Navigation Techniques for Dual-Display E-Book Readers”. (Credit on the paper also goes to Nicholas Chen, François Guimbretière, Morgan Dixon, and Cassandra Lewis.) They made the observation that when we read (especially longer pieces) we tend to flip back to a previous page to refer to a character or name or concept from before, and I think the visual memory of where that information was positioned in the visual field helps recall. Which is why two opposing screens that are framed separately (rather than simply dividing one monolithic screen) make cognitive sense to our modern brains, which have grown up in a culture of the codex. The “page” (or, as I like to call it, the “unit of reading”) that we just read is a convenient glance away if we need to refer back to it, instead of completely out of our field of vision in the ether somewhere, requiring us to find some button or other to get it back.
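That glance-back behavior can be modeled in a few lines: a two-screen reader where one screen always holds the page you just read. This is only a toy sketch of the interaction idea, with class and method names of my own invention, not anything from the paper:

```python
class DualDisplayReader:
    """Toy model of a two-screen e-reader: the left screen always
    holds the previous "unit of reading", the right the current one,
    so referring back never requires hunting for a button."""

    def __init__(self, pages):
        self.pages = pages
        self.current = 0

    def screens(self):
        # (previous page or None, current page)
        left = self.pages[self.current - 1] if self.current > 0 else None
        return (left, self.pages[self.current])

    def turn_page(self):
        if self.current < len(self.pages) - 1:
            self.current += 1

reader = DualDisplayReader(["page 1", "page 2", "page 3"])
reader.turn_page()
print(reader.screens())  # the page just read stays a glance away
```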
But my biggest argument for a dual display e-reader is really an intimacy issue. Having two screens allows you to shield what you’re reading from the person sitting next to you, and helps bring the act of reading back to a private experience, where you can cuddle up and luxuriate with the words, just you and the author, and get lost in another world.
I know, I know, it’s taken me this long to post my video from PKNY7? Yes, the shoemaker’s children etc. Anyhow, this presentation and the strict format forced me to distill my ideas into a frustratingly succinct argument (which sidesteps the more interesting parts about the cognitive attention mechanism and information foraging talent of the brain). I’ll be posting the “Director’s Cut” version here at some point.
My presentation on the future of reading, long-form journalism and publishing (plus some screenshots of the Redub Reader) in 20 slides (20 seconds each slide) at Pecha Kucha NY, 9/14/09 at Solar1.
Thanks to Ayagwa for filming and editing!
This is the short version of a presentation on online magazines we’ve been working on here at Redub. It ends with a link to an in-development demo that features content from GOOD’s Transportation Issue 015. Casey Caplowe (GOOD’s Creative Director) generously gave us the InDesign files for the entire issue and we re-figured some of the content so it fit on the screen natively. We even had to re-imagine the Transparencies because they just didn’t work when we threw the original (for-print) images up on the screen (which, sadly, is what most publishers do). Since we didn’t have the high resolution of print, we took advantage of the screen’s native attributes, namely animation. I’d even posit that what the screen lacks in dots per inch it more than makes up for in dots per inch per second.
There are still features we are hinting at but that we’re still working on adding, like annotation (which is the biggie). We’re laying in the sharing stuff now.
Oh, and as far as search engine optimization is concerned, we’re working on a solution for that. Right now all of the content is stored as XML in a database (modeled on WordPress). We just have to build a front-end for it that spiders can crawl all over.
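A minimal sketch of what that crawlable front-end might do: render each stored XML article as plain HTML that a spider can index. The element names and sample content here are hypothetical, not our actual schema:

```python
# Render a stored XML article as plain, crawlable HTML.
# (The <article>/<title>/<body> element names are invented for
# this sketch; they are not the schema we actually use.)
import xml.etree.ElementTree as ET

def article_to_html(xml_string):
    root = ET.fromstring(xml_string)
    title = root.findtext("title", default="")
    paragraphs = [p.text or "" for p in root.findall("body/p")]
    body = "\n".join(f"<p>{p}</p>" for p in paragraphs)
    return (f"<html><head><title>{title}</title></head>"
            f"<body><h1>{title}</h1>\n{body}</body></html>")

sample = """<article>
  <title>Transportation Issue 015</title>
  <body><p>First paragraph.</p><p>Second paragraph.</p></body>
</article>"""
print(article_to_html(sample))
```

Since the content lives in the database as XML anyway, emitting a plain-HTML mirror of every article is cheap, and search engines see real text instead of an opaque Flash embed.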
And feedback is welcome!
I have been following the #iranelection hashtag on Twitter for the past two days, and I’ve been noticing a few things about online trust.
Two users in particular have surfaced out of the din of that particular stream (not sure whether naming them will expose them to further harm, so I will call them Pyramus and Thisbe). As I watched their posts throughout the past 48 hours, the true power of asymmetrical following proved itself: they told their stories in real time to me through my own Twitter screens (private through relative obscurity), and simultaneously to the rest of the world. If you didn’t know whom to trust, or didn’t choose to, they would be just another pair of green faces in the crowd.
But how did I come to trust these two voices in the midst of this utter chaos? (This question was posed to me by Ted on Facebook.) Trust is built out of so many intangible things, online and off, and many people trust and distrust for different reasons. I am a very trusting person to begin with. I used to say that everyone I meet starts with 100 points, and over time, with each questionable or untrustworthy interaction, I deduct points. For me, it’s easy to lose points and hard to gain them back. Other people have the opposite philosophy: they distrust everyone from the beginning, and people have to earn trust points over time. Same mechanism, different directions.
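The two philosophies really are the same mechanism run in opposite directions, which is easy to see if you sketch them out (the starting scores and point values below are my own arbitrary choices):

```python
# Two trust philosophies, same mechanism, different directions.
# "quality" of an interaction is a number in [-1, 1].
# (Starting scores and weights are arbitrary, for illustration only.)

def trusting(interactions, start=100):
    """Start with full trust; bad interactions subtract heavily,
    good ones restore only a little."""
    score = start
    for quality in interactions:
        if quality < 0:
            score += 10 * quality  # easy to lose points...
        else:
            score += 2 * quality   # ...hard to gain them back
    return score

def skeptical(interactions, start=0):
    """Start with no trust; only good behavior slowly adds points."""
    score = start
    for quality in interactions:
        score += 2 * max(quality, 0)
    return score

history = [1, 1, -1, 1]
print(trusting(history), skeptical(history))  # prints: 96 6
```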
The first and most critical step in determining trust online is establishing identity. But this particular case demonstrates how easy (and difficult) it is to determine if someone is who they say they are. Pyramus’ Twitter profile says she is an “Iranian Student”. That is an unverifiable piece of data, but coupled with the content of her posts over time, I believe her. Identity is who you say you are, and the way you say things. And when.
If you have ever seriously tried to date online, you will know this to be true. On the other hand, you will also know that another’s identity can be completely constructed in your own mind out of the same tiny clues, inflated by what you want to see.
But again, how did I come to trust Pyramus and Thisbe, out of all others, and decide to follow them as trusted sources reporting from ground zero? Not sure, but it was some combination of Farsi name-dropping, citation of landmarks that were later verified (IRIB for one), quality of reportage, and the underlying tone of posts. Oh, and not to be overlooked, sometimes misspellings and alternate translations of place names (“Valli Asr” for “Vali-e-Asr”). There are myriad other incalculable hints that I’m not even conscious of, but after listening to this person’s voice through these little messages long enough, you just begin to trust that they are genuinely who you think they are. They don’t have to ask you to trust them (they don’t have time for that); there is an urgency that comes through both the breathiness of Twitter and the odd little mis-typings that signal a human touch.
I can almost see them as night falls over Tehran, preparing for the ensuing battle, ready for the Baseej to come breaking through the glass and the doors, typing out messages to the rest of the world and posting them before the censors block their connections to the internet, making every single 140 character payload worth more than life itself…
Be safe, Pyramus and Thisbe.
This is my quick informal summary after having watched the #iranelection thread for the past day.
* Channel 4 (UK) footage of Basij Islamist Militia shooting into crowd and killing at least 1 (up to 7 casualties have been reported).
* @persiankiwi (not sure of gender) reports that more than 100 students missing from Tehran University dorms, reports of several dead from last night.
* @change_for_iran (http://twitter.com/change_for_iran) is also riveting.
* Twitter is going down for maintenance for 90 minutes tonight. Many on the thread are pleading for them (the powers that be at Twitter) to hold off, showing just how essential this service is at this moment.
* Incredible photos here: http://www.boston.com/bigpicture/2009/06/irans_disputed_election.html
* The Iranian Government is obviously locking down all media which might facilitate dissent and incite more rebellion. All mobile phones are down. They are blocking most major media outlets and websites, and there is a cyberwar going on as well. Some are trying DoS (Denial of Service) attacks on Iranian Government websites.
As part of their blockage of internet sites, they are blocking the range of IP addresses that originate from Iran. This effectively shuts out people posting from the scene using Twitter and other micro-blogging services. Some Iranian Twitterers on the ground are asking anyone on the outside to set up proxies that will allow them to keep using the internet to tell the rest of the world what is going on.
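Mechanically, a proxy is simple: the blocked client routes its HTTP traffic through a machine whose address is not (yet) on the censors’ blocklist. Here is a minimal sketch in Python, assuming a hypothetical proxy at a placeholder address (203.0.113.7 is a documentation-only IP, not a real relay):

```python
import urllib.request

# Route all HTTP/HTTPS requests through an intermediary machine.
# To the censored network, traffic goes only to the proxy; to the
# destination site, traffic appears to come from the proxy.
proxy = urllib.request.ProxyHandler({
    "http": "http://203.0.113.7:8080",   # placeholder address for illustration
    "https": "http://203.0.113.7:8080",
})
opener = urllib.request.build_opener(proxy)

# opener.open("http://twitter.com/")  # requests would now go via the proxy
```

This is also why posting proxy addresses in public is self-defeating: the same address that unblocks users can be added to the blocklist as soon as it is discovered.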
Wikipedia: FreeGate is software that enables internet users from mainland China, Iran, Syria, Tunisia, Turkey and UAE, among others, to view websites blocked by their governments.
* Here is one tweet from @Change_for_Iran:
“using freegate now, nothing else working. no power in most of the buildings & cellphones & land lines are out again. #iranelection”
* There is an interesting dilemma on the thread because some people (@stephenfry) have been answering this plea by posting the IP addresses for new proxies on #iranelection, and others are saying not to post proxies on Twitter because they are being exposed and shut down.
“If the news is important, it will find you.”
Web Design Booth has a rundown of 15 Extremely Useful Grid Generators, and collaborator, co-conspirator and partner, Netprotozo’s Grid Generator comes in at #4 (though it’s not clear if they are ranked in order of usefulness)! Rock on, Karl! Encore!!
(IMHO, we’ve tried many of these grid generators, and while they all have excellent qualities, the Netprotozo grid generator has many intrinsic advantages, namely the flexibility and robustness that comes from having been employed in many real-world projects. Karl’s really done a great job of incorporating some critical elements which allow things like inter-column padding, and an underlying base unit which is an incredibly powerful concept not present in many other CSS grid systems.)
So I tried. My little experiment in trying to tame my attention deficit by limiting the number of tabs I would allow open at one time — FAIL. I suppose it was doomed to failure from the outset, but I learned a few things along the way about attention and how we browse:
I used to doubt the hype about Twitter, until last week. When Alex and Adam posted a tweet (at 5am no less) looking for a presentation whiz to visualize balance sheets and the economic situation, I almost fell out of my chair, because as it happened, when they made their amazing episode on This American Life back in February called “Bad Bank”, I had started doodling (first on paper, then moved to the computer) as they were talking in an effort to try to understand what was going on visually. OK, I admit, I listened to it about 5 times, and eventually I ended up with a series of slides that I posted to them last week (I think it was 7am) via Twitter.
Turns out they were making a live presentation in LA a week later, and with Ryan Lauer, who was also a huge fan of the show (we listen to it in the office), we expanded it into a longer slideshow in Keynote ’09, which, by the way, kicks serious ass once you get to know how to use it. Anyways, I cannot tell you how amazing this experience was working with them and how much fun we had working on this. They are my heroes.
Plus, I’ve gotten an invaluable crash course in economics and I can’t wait to do more with them. They actually make this incredibly complex and crucial stuff understandable in human terms, which is exactly what Redub does with creative visualizations of data.
UPDATE: Entire webcast embedded above! Link from Planet Money’s blog.
PS — if you’re wondering what “WTFJHTOE” stands for, check the webcast
If you’re an information architect or user experience designer, or even if you’re not, you’ve probably heard the “Rule of Seven” axiom. That is, seven (plus or minus two) is the magical number of things your brain can comfortably hold in working memory before it freaks out and either shuts down or needs help. Call it “channel capacity” or “user-friendliness” (why does that term seem so antiquated?), call it what you will. Information architects know that chunking things into seven or fewer items or categories in a navigation bar is just a good, humane thing to do. It has been posited that a tightly-knit group of seven people is an optimal community size, because above that number communication tends to break down, not everyone interacts naturally with each other, and cliques begin forming. Seven-digit phone numbers, seven days of the week, seven wonders of the world, the seven seas, the seven deadly sins, the Magnificent Seven…the list goes on and on if you want to look for it. You can speculate as to why there is this natural limit on our perceptual machinery (my tongue-in-cheek hypothesis is that it’s the average of the number of fingers on one hand and the total number of fingers) but whatever the real reason, I accept it as a nice and useful constraint.
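The chunking that information architects do can be sketched in a few lines; the navigation labels below are invented for illustration:

```python
def chunk(items, size=7):
    """Group a flat list into chunks of at most `size` items,
    per the seven-plus-or-minus-two rule of thumb."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Nine top-level links is too many for one navigation bar...
nav = ["Home", "About", "Products", "Blog", "Contact",
       "Careers", "Press", "Support", "FAQ"]

# ...so break them into comfortable groups (here: 7 + 2).
groups = chunk(nav)
```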
Recently, I started thinking about applying the Rule of Seven (plus or minus two) to my own version of “Getting Things Done”. You see, I am a tab-slut.
If you walked by my monitor at any point in the day (or night) you would probably be astounded at the sheer number of tabs I have open at one time in my browser. On average I’d say I have at least 20 to 30 tabs open. And one day I asked myself, Why? Why does each and every one of these different websites need to be open? Is this a symptom of ADD? Or am I just lazy? I mean, you could say the same thing when you see the stack of dirty dishes in my sink (though I’m not as bad about that).
So as an experiment in productivity, I decided to impose the following rule on my browsing:
Thou shalt not have more than 7 browser tabs open at any given time.
Of course this also implies that Thou shalt not have multiple browser windows open (if you can help it).
I welcome anyone else to try this experiment with me and share your discoveries. I promise to post my thoughts at the end of today, because after tomorrow, I will be leaving for my honeymoon, where I have decided to take things a step further and go completely off the grid. Wish me luck! (I’m gonna need it! Bad!)
Stanza’s great. So’s Instapaper and the Kindle iPhone app. But let’s be honest here. If I look at my real app usage (this is my own personal reckoning, since I don’t have RescueTime or Google Trends for my iPhone) here’s my top 5 in terms of actual usage:
One game, a social networking app, email, microblogging, and the news. Do you see an actual reading app here anywhere?
But what about the news, you ask? That’s reading, no?
No. Well, let’s be more specific. It’s short reading, browsing, scanning. News stories are generally around 600 words or less. Anything longer and I’m going to be worrying about my battery life or waiting to get to my computer. I’m going to generalize here and say that my app usage is for short, bite-sized activities. Small, just like the iPhone’s screen.
Now, I’m sure there are people out there who actually do slog through long reads on their iPhones (using the aforementioned apps). For some, I’m sure it’s a point of nerdy pride (“Look! I can read a free sci-fi eBook on my handheld device!”) and for others it is an occasional convenience (“Bored. Stuck here without any reading material. Oh yeah, I can use my iPhone to read that article I saved to instapaper 3 weeks ago!”).
But let’s be honest: reading on the iPhone is sub-optimal at best.
Why? Because reading, the long, focused trance of real reading is, and should be, a pleasure, not a convenience. To be able to sink into a well-wrought text requires an environment relatively free of distraction — and that includes the reading surface itself — because following complex thoughts and detailed verbal description is like walking a tightrope. Any little lapse in concentration — an inconsistent scrolling of the text, finding the pagination, targeting the next page button, waiting more than a second for it to load, an accidental tap or swipe that jogs the interface, a new message — breaks the spell, and the words go back to being mere words and the world your imagination has been constructing burns away like a fog.
It’s the difference between watching a movie on YouTube versus going into a dark theater with comfortable seats, immense screen, and surround sound. People will continue to pay (the price of a paperback) for that experience, just as they will continue to pay for well-set, well-edited books on good paper.
That which Facebook calls “like” by any other name would be called “disgust”, or “approval”. Ay, there’s the rub. (Apologies to dear Bill.)
Ever since Facebook started pushing activity of people/organizations you are “fans” of into your news feed, it has become clear that their nomenclature for certain events/actions needs some work. Just as “friending” is a meh blanket term for a bi-directional affinity relationship, “becoming a fan of” is an expression of uni-directional affinity; what we need now is some kind of uni-directional gesture of recognition or attention. Maybe it’s as simple as “I am paying attention to this” or “I have paid some attention to this”. The question is whether we need a multi-faceted metric here, because you can pay attention because you think it’s cool, witty, funny, or smart, or something can catch your eye because it’s horrific, crazy, sad, or sick.
At the end of the day, I am pretty sure Facebook’s intention behind introducing this feature (rather hastily) is that it wants some kind of simple way to measure influence (ie, how many people [insert term here] your stuff/thoughts/updates) or attention.
I happened to pick up a complete issue of the New York Times paper edition yesterday and I had a strange, disconcerting experience. I suppose you could call it déjà vu, but I think it’s slightly different, slightly more explicable than that…
I had given up my daily subscription to the Times two years ago, subsisting now as a “Weekender” and the truth is, I am paying $3.45 a week for the New York Times Magazine, since that’s the only section I really read. The rest, as they say, is “fish wrap.”
All other days, and even weekends, therefore, my daily experience with the Times is through its superb digital online product. So there I was in the hallway of my office, waiting for Ryan to come in since I had left my keys inside in my rush to leave the day before, and, bored, I picked up the newspaper someone had left for recycling, fully intact. After scanning the front page for a second, I realized that I had seen each of these headlines the day before online.
I hadn’t read each article, of course, but as I flipped further, I thought to myself, “So that’s where they put that article, and oh, I didn’t realize that one got the entire front page of the business section!” It was like someone had come in and re-arranged all of the furniture in my apartment, with different priorities and a different sense of order.
And one of the beauties of this post-digital encounter was that I stumbled on a fascinating article which hadn’t been on the “most e-mailed” list and it was a blip in the parade of articles on the homepage that day. But there it was, front and center on the business section:
Google, the online giant, had been sued in federal court by a large group of authors and publishers who claimed that its plan to scan all the books in the world violated their copyrights.
As part of the class-action settlement, Google will pay $125 million to create a system under which customers will be charged for reading a copyrighted book, with the copyright holder and Google both taking percentages; copyright holders will also receive a flat fee for the initial scanning, and can opt out of the whole system if they wish.
But first they must be found.
The article was about Google’s campaign to satisfy the terms of this class-action settlement, payback, if you will, for attempting to scan and offer digitally every book in the universe, to compensate the authors and copyright holders for this use of their “property”. The irony was that, in order to achieve this, Google was taking out half page ads in newspapers all over the world, an undertaking only Google could pull off.
Fancy that: Google having to use paper to distribute information.
It just goes to show: print is going to recalibrate itself from what it used to do (everything from phone books to news to long texts to novels) to focus on what it does really well in a digital, networked world (not hyper-fresh news or phone books, but on-demand magazines and books, and information distribution off the grid).
I downloaded the Kindle iPhone app today after reading about it in the Times, and I took it for a quick spin. Here’s the title screen:
- I synced it with my Kindle2 and it took me to the “last read” section of the book I was reading. Now this is going to make us really have to re-think the act of reading itself.
- Swipe to turn pages — better than scrolling
- Nice to see even a touch of color (in the hyperlinks)
- Still insists on justifying the text. I’m sure there’s some technical reason for this, but I would love to see how it reads ragged.
- Here’s where I’d really *love* to have text-to-speech
- How do I put an eBook (EPUB) on this thing?
- Shouldn’t the Kindle iPhone app allow me to buy stuff inside the app?*
* When Hamilton (who works at the Times) complained that he was having trouble buying a newspaper with the app, I went and tried it myself, shrugging off the weird implications (buying a “paper” to read on your iPhone??). It seemed so roundabout, going to the website to buy today’s paper edition of the news to read on my Kindle when the actual website for the Times or WSJ or whatever organization is a few keypokes away. The tedium of those extra http requests is certainly not worth the reading experience of the Kindle iPhone app. Anyways, it didn’t work for me either: I went to Safari, logged into Amazon, and bought a copy of today’s Wall Street Journal (for $.75) and when I synced my Kindle app, it wasn’t there. Boo hiss.
Stanza should be quaking slightly in its boots, though the closed-ness of the Kindle app really damages its networthiness, or at least, my unbridled full-bodied embrace of it, and the Kindle for that matter. Whether having fanboys is better than having general consumers fork over actual money remains to be seen.
OK, I’ve spent a total of 48 hours with the new Kindle, and here are my observations:
- Definitely need to get some sort of sleeve or cover for it. I have visions of opening up my bag to find a broken screen and a grey goo situation.
- Yes, it looks like somebody ran over an iPod and flattened it out. That said, a flattened out iPod-like Kindle2 is preferable to the original Kindle model. It’s like Star Trek v. Star Trek: The Next Generation. Sleeker, better special effects.
- And speaking of TNG, Wil “The Ensign” Wheaton has a great little post about the whole text-to-speech debate. To recap, the Authors Guild’s panties are all up in a bunch because the Kindle upgrade now reads any text aloud using text-to-speech software (not sure if IBM made the software but it’s pretty good), saying it takes away the livelihood of people who make their living from audiobooks. Personally I feel this argument is so tired by now (see horse-buggy manufacturers and telegraph operators), and so do some of the more forward-thinking authors like Neil Gaiman and Cory Doctorow. Anyhow, Wil Wheaton asks “What if we’re wrong?” and does a side-by-side comparison reading of his own book, Sunken Treasure, against the Kindle2 auto-reader. It proves his point pretty well. I’m waiting for the time when authors are up in arms because the computer reads better than a human. Then again, Deep Blue didn’t kill chess’ popularity.
- My daily interaction with the K2 is: as I’m in transit to the subway, I plug my iPhone earbuds into the jack and fumble around to get the reading started, stick it into my bag, and let it run. I like to keep the reading speed set to slow because otherwise it starts to skimp on the pauses between certain words, basically rendering the experience incomprehensible. (You’d think Amazon would fix the software so it didn’t read its parent company as “amazon point com”.) Also you have to jack the volume way up if you’re outside or in the subway unless you have some noise-cancelling headphones. So I would say if you’re listening to a novel the comprehension level will be sub-optimal, around 75-80%. It’s like when you’re reading a page and you realize you were just reading the words but not actually getting the meaning completely after a certain point and you just zone out. Like I’ve said before, reading is a delicate process.
- Where I wish it were more like an iPod is in externalized controls over the reading, so that I don’t have to turn the damn thing on in order to stop the reading of the text. This is a key user interaction principle that is often overlooked: don’t make your user look like an idiot. Or rather, enable your user to achieve effortless grace. I love the iPhone’s double-click functionality on the home button which brings up the volume and play/pause control on top of the screen lock, so you can pause or change the volume almost effortlessly and then go back to what you were doing before. It’s a subtle but incredibly powerful and thoughtful piece of user interaction design.
- It is neat that the screen follows along with the reading of the text.
- They added a USB 2.0 micro port — great, can’t use my old power supply, though that was a mistake anyways. This is what it should have been from the beginning. Much better.
- The screen is still the same, but it uses the Epson Broadsheet controller which gives slightly faster refreshes by dividing the screen into 16 pixel sets and updating them in parallel. Still doesn’t solve the “black flash” problem though, which is more a function of the eInk technology itself.
- The “Locations” or Progress Indicator is pretty much the same (it now shows bookmarks and chapters etc). I know from designing the Redub Reader that this is not an easy thing to design, what with users coming from a book, which never really needed a “where am I?” indicator. The problem for eReaders is, if you can resize the text, how do you define what “page” number you’re on? I just don’t like the word choice.
Yes, this is my second Kindle.
I like the redesigned home screen far better than the original. The thick black rule indicates which selection you’re on and the subtle dots show where you are in the progress of your book or article. I was puzzled at first by the two arrows to either side of the bar, but they turn out to be some common/useful functionality they moved up from the menu (which is always a chore to use). Left (on the 5-way controller) allows you to remove the item, and right brings up a whole screen of options for that selection, like description, go to the last page read, etc. It’s just a little cleaner and more usable.
All in all, I’d give the K2 a thumbs up. The pricepoint is still too high, but factoring in the on-board text-to-speech software (which I’m sure isn’t cheap) and the more elegant design, I’d give it a B.
Well, after a flurry of slapdash HTML/CSSery done while watching the Oscars, I’ve put up the new public site for the Redub Reader. It’s at http://reader.redub.org. Totally coded by hand, though I’m sure I’ll port it over to WordPress at some point, but sometimes it’s just easier to maintain good ol’ fashioned HTML pages with TextMate.
I’m sure someone else is doing this as well. I just couldn’t find anything out there. There are lots of artifacts from the ridiculous Word-to-scanner-to-PDF conversion that render it unreadable. This tiny bit of typesetting took me the good part of my Saturday morning. Anyone else out there want to help out?
The American Recovery and Reinvestment Act of 2009 (the Stimulus Bill, 1/15/09)
Beginning – SEC. 1111. EMERGENCY DESIGNATION
I just wanted to give you loyal Redub readers who follow my indulgent musings here on the blog (yes, all 3 of you) a little sneak peek into the in-house project that we’ve been working on for the past few months.
Thanks to the amazing skills of Ryan Lauer (with crucial assistance and input from Karl here in the office) we’re getting to a point with this project that we’d like to start getting some input from actual users.
Instead of giving you a long spiel about what it is, I offer you this dorky little video as a trailer:
You can sign up at the URL in the video, or you can just go directly to the development URL:
A couple of things:
- Hint: Don’t tell anybody…but try just importing any NYTimes URL instead of copy and pasting. Works best for long NYTimes Magazine pieces that really don’t translate well into their regular news article templates.
In my earlier post about the Kindle, I wondered why Niall Ferguson’s new book cost so damn much. $30 for a bunch of paper, ink, and cardboard? This, and the current recession, got me to thinking about the value of information and its physical manifestations.
Why books cost so much
I did a little research and I found an old Salon piece from 2002 by Christopher Dreher that ventures forth a good breakdown:
Why do books cost so much? Consumers are often baffled at the price tag attached to what appears to be little more than a mass of paper, cardboard and ink. A whole host of factors, including the size of the book, the quality of paper, the quantity of books printed, whether it contains illustrations, what sort of deal the publisher can make with the printer and the cost of warehouse space, all affect the production costs of a book. But, roughly speaking, only about 20 percent of a publisher’s budget for each book pays for paper, printing and binding, the trinity that determines the physical cost.
The rest of what you shell out for, say, the new Donna Tartt novel pays for the publisher’s overhead (the cost of maintaining a staff of editors, proofreaders, book designers, publicists, sales representatives and so on), and for the cuts taken by distributors (who run warehouses that supply books to retailers) and booksellers. Promoting the book is another expense: printing up catalogs presenting each season’s titles to booksellers and the media, purchasing ads, mailing out hundreds of review copies to critics and sending the author (if he or she is lucky) on a book tour. So are shipping fees and the storage costs on unsold copies.
Why food in restaurants costs so much
This is a big aside, but I am reminded of my days running the back of the house for a restaurant, when I had to do food costs and pricing on every item in the menu. This involved standing on the line with a digital scale and as the cook was firing each dish, I would stop him, weigh each piece of meat or veg he was using down to the scallion, and hand it back to him so he could finish the dish. (I know, I know, you’re supposed to do all this before you launch. Hey, it was my first time! Gimme a break!) Later I made a spreadsheet of all the bulk prices we paid each vendor from our order sheets and calculated the actual food cost of each dish. It was painstaking work but here’s what I learned from that experience:
If you want to know how much an item on the menu costs in most mid-range restaurants (1 to 2 stars), take the price on the menu and divide by 3. That’s the cost of the raw materials it takes to make the item. There’s some debate in the restaurant industry as to whether labor factors in, but in most New York restaurants that cost is usually deemed negligible (you can guess why). If restaurants had to pay living wages and benefits to their back of the house staff, you would probably see those prices go way up, which is why fancy restaurants cost so much. The price of a steak has an upper bound. It’s the overhead for everything else (ambience, Daniel Boulud’s salary, etc.) that you’re paying for.
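That rule of thumb is just arithmetic. As a quick sketch (the $27 entrée is an invented example):

```python
# Mid-range restaurant rule of thumb: menu price is about 3x raw food
# cost, so dividing the menu price by 3 estimates the ingredient cost.
def estimated_food_cost(menu_price):
    return menu_price / 3

cost = estimated_food_cost(27.00)  # a $27 entrée: roughly $9 of raw materials
```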
Why CDs cost so much
Pennies That Add Up to $16.98: Why CD’s Cost So Much
A Neil Strauss piece back in July 5, 1995 for The New York Times (before he became a pick-up-artist and wrote The Game) points out that (back in 1995) a CD, say, Rod Stewart’s “A Spanner in the Works,” cost “more than 100 times the cost of the materials used to manufacture it”. (Incidentally, 14 years later it’s selling for $8.99 on Amazon.)
Setting prices “is very arbitrary,” said a top executive at a major label, who described his company’s pricing policies only on condition of anonymity. “We’re trying to raise CD prices,” he said. “The reason for this is that our costs are escalating in such a marginal way, everything from marketing to promoting to signing bands. It costs $400,000 to $600,000 to sign a band. The first video costs a minimum of $50,000. Touring is more expensive, and people’s salaries are a lot higher. Our profit margins are being squeezed.”
In a word: price-fixing.
Also here’s a really good Fresh Air interview with Rolling Stone contributing editor, Steve Knopper, talking about the rise and fall of the record industry.
Why does a newspaper cost so much?
Yesterday, the Silicon Alley Insider did a great little back-of-the-envelope thought experiment to figure out roughly how much it costs the New York Times each year to produce the paper and distribute it (including the cost of printing, salary and benefits to all the writers and employees) and concluded it would be cheaper to give each of their subscribers a Kindle (current market value, a whopping $359)! (Think of all the trees!)
According to the Times’s Q308 10-Q, the company spends $63 million per quarter on raw materials and $148 million on wages and benefits. We’ve heard the wages and benefits for just the newsroom are about $200 million per year.
After multiplying the quarterly costs by four and subtracting that $200 million out, a rough estimate for the Times’s delivery costs would be $644 million per year.
The Kindle retails for $359. In a recent open letter, Times spokesperson Catherine Mathis wrote: “We have 830,000 loyal readers who have subscribed to The New York Times for more than two years.” Multiply those numbers together and you get $297 million — a little less than half as much as $644 million.
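The arithmetic quoted above is easy to reproduce:

```python
# Figures from the Times's Q3 2008 10-Q, as quoted above.
raw_materials_per_quarter = 63_000_000
wages_and_benefits_per_quarter = 148_000_000
newsroom_wages_per_year = 200_000_000  # reported estimate for the newsroom alone

annual_costs = (raw_materials_per_quarter + wages_and_benefits_per_quarter) * 4
delivery_costs = annual_costs - newsroom_wages_per_year  # $644,000,000

# One $359 Kindle for each of the 830,000 two-year subscribers:
kindles_for_everyone = 359 * 830_000  # $297,970,000, under half the delivery costs
```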
I predict that one day in the near future, a large media company (if not the Times) will come to this conclusion, and the eBook reader market will explode. Either that, or they will come to the realization that most people have a computer (or at least access to one) and a cell phone already (plus people won’t really do pass-along with Kindles or eBook readers) and decide to just go completely digital.
Why do videocassettes and DVDs cost so much?
Do we really need to go over this? I quote from another post in 1995, this time from Nicholas Negroponte, who really started me off thinking about the distinction between atoms and bits with his seminal book, Being Digital:
During a speech I gave at a recent meeting of shopping center owners, I tried to explain that a company’s move into the digital future would be at a speed proportionate to the conversion of its atoms to bits. I used videocassette rental as an example, since these atoms could become bits very easily.
It happened that Wayne Huizenga, Blockbuster’s former chairman, was the lunch speaker. He defended his stock by saying, “Professor Negroponte is wrong.” His argument was based largely on the fact that pay-per-view TV has not worked because it commands such a small piece of the market. By contrast, Blockbuster can pull Hollywood around by the nose, because video stores provide 50 percent of Hollywood’s revenues and 60 percent of its profits.
And 14 years later, we all know who was eventually proven right. Can you say Netflix?
This recession is bringing the digital revolution and all of its myriad, economy-altering implications to a head, accelerating the inevitable. This is no time for nostalgia. Do you know how to publish and remix your content to multiple channels and extract full value from this networked, information economy?
This is REDUB.
Aya and I were watching the trailer for We Live in Public on Sunday and there was a line that said something to the effect of “blah blah mumble being online all the time mumble mumble like an addiction, it’s like Attention Deficit Disorder blah blah” at which point Aya shot me an accusing glance, in a kind of non-verbal intervention.
Okay, I admit it (that’s the first step towards recovery, right?). I have a problem. I am online most of my waking hours (see my self-analysis). Rarely do my computers ever get switched off (I just sleep them). I can argue that it’s my livelihood. I can say I’m trying to be one of Malcolm Gladwell’s Outliers and that I have to amass 10,000 hours of, um, practice so I can be an “expert” on teh internets.
But the truth is, I like the feeling of knowing what’s up with my network, and the rest of the world. I am more aware than I used to be. I care about politics because I am more engaged. I can blame part of it on genetics. Growing up, my brother and I would rarely be without a book. I used to carry a huge backpack filled with books wherever I went — in fact, I would feel naked without the weight around my shoulders. My brother ate sci-fi pulp novels for breakfast. (He is actually a freakishly speedy reader, eating entire pages in a glance.) My dad would spend hours sitting on the toilet reading scientific journals (xeroxed from the library).
The point I tried to make is that the only thing that’s changed is that we’ve shifted the same activity from “atoms to bits” (as Nicholas Negroponte likes to put it). No more 50lb backpacks; just a 4lb laptop. Instead of reams of paper, which are now gathering dust in a box taking up space in the basement, I now have Evernote, del.icio.us, and Google Reader that live in the airy Cloud.
The thing we can’t seem to get over is this: when it’s on paper, it’s okay. But when it hits the screen, somehow it becomes problematic, stigmatized, it’s an “addiction.”
One way of looking at it is that we have gotten lulled into the idea that if something made it into print, it had to be knowledge. But we now know this is not the case. We’re all in a jumble right now. The computer is the locus of too many activities: work, play, banking, browsing, rubber-necking at the train-wreck of humanity, study, creativity, etc. They are all crammed together and flattened out such that the bad taints the good (never the opposite).
Two designers in London have printed Things Our Friends Have Written On The Internet 2008, a publication of “stuff from the internet…printed in a newspaper format.”
Another way of seeing it is from a very physical reality. For all its atomic encumbrances, the book is portable, and computers, surprisingly, less so, though all that is changing. I am seeing more ordinary people whip open their laptops on the subway, more people reading on their phones, and a new wave of netbooks is hitting the streets. The screen forces us to come to it. It emanates information, and it is information of an altogether new and different quality because it is born on a screen and is meant to live on a screen, never to be frozen in print, and we are entranced by its flickering aura.
Karl (netProtozo) and I have been working together for almost 8 years off and on, and now we share an office, where we consult and work on projects together.
Karl whipped this little grid generator up today. It rocks for several reasons. The first is it’s the result of our having worked on large-scale sites and our approach is reflected in it, like the idea that there should be an underlying fractional unit for the columns, gutters, and super-columns. The other feature is invisible, but it hooks neatly into the HTML framework we’ve been developing (and the project that is codenamed Morpheus). This approach has been used on GOOD and Conductor’s beta application.
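To make the fractional-unit idea concrete, here’s a rough sketch of the arithmetic involved. This is my reconstruction of the concept, not Karl’s actual generator code, and all the function and parameter names are made up:

```python
# Sketch of a fractional-unit grid: every measurement is a multiple of
# one base unit, so columns, gutters, and super-columns always align.
# (Illustrative only -- not the actual grid generator's code.)

def grid(total_width, columns, unit, gutter_units=1, column_units=3):
    """Return pixel widths for a grid built on a single base unit."""
    column_w = column_units * unit
    gutter_w = gutter_units * unit
    used = columns * column_w + (columns - 1) * gutter_w
    if used > total_width:
        raise ValueError("grid does not fit: %d > %d" % (used, total_width))
    return {"column": column_w, "gutter": gutter_w,
            "used": used, "margin": total_width - used}

# A 12-column grid on an 8px base unit inside a 960px page:
g = grid(960, 12, unit=8, gutter_units=2, column_units=8)
# -> 64px columns, 16px gutters, 944px used, 16px left over
```

Because every width is derived from the same unit, a “super-column” spanning, say, two columns plus the gutter between them is guaranteed to land on the grid.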
Nothing excites me more than getting my paws on something I really really want to read — a new graphic novel, the latest New York Times Magazine, or perhaps the book I’m currently working on (reading, that is). Maybe it’s the English major in me. I am one of those people who finds it hard just to sit down and eat breakfast without reading something, even the side of a cereal box if there’s nothing else.
Also my destination affects this state of excitement considerably. Perhaps it’s this sense of readerly anticipation that makes me look forward to train rides so much (long plane rides less so). Locked in a train car, I know I will be a willing captive to my reading material, like Ulysses roping himself to the mast so he can hear the aching beauty of the sirens without going mad and throwing himself overboard.
My sirens are the multitude of RSS feeds I’m subscribed to, every item an irresistible maiden of interestingness. I am a creature of distraction. (And I know I’m not alone.)
The age we’re in right now isn’t helping much. I check my precious feeds on my iPhone if I’m not on my laptop or sitting in front of my computer at work. My Google Reader Trends are damning:
I redrew the graph to highlight a couple of things:
I threw a gradient in the background to show approximate daylight versus items coming into my RSS feeds.
Josh (i2pi) happened to be in the office and remarked that he hates seeing the (1000+) indicator in his Google Reader (ie, that a feed group has more than 1000 unread items) because he enjoys the sense of accomplishment of processing through a whole “stack” of items, watching it drop from 1000 down to zero. I realized later this is called “inbox zero” (from Merlin Mann’s really interesting talk, originally about managing email clutter).
What I got from this little bit of self-reflection is:
1. I cast a pretty wide net.
I am subscribed to something like 47 RSS feeds, many of which yield thousands of posts a day (Digg, reddit). If you’re curious, here’s the public page of my “blogosphere” RSS feed. I occasionally unsubscribe from feeds when they begin to feel spammy, but in general, I like to fly at 20,000 feet, scan the headlines, then zoom down when I see something that catches my eye. Or to quote Clay Shirky once again: “there is no such thing as information overload, there’s only filter failure.” That said, my filter could probably use some tweaking.
2. I check my feeds. A lot. Maybe too much.
Just to satisfy my curiosity, I took the “Items Read” from my Google Reader Trends graph and amplified the range to get a better view:
This represents the actual volume of items I bothered to click on and read through. In online advertising-speak, my clickthrough rate on my blogosphere RSS feed is around 10-12%.
If you suppose that, on average, a blog post is around 500 words, and I read 839 items in the past 30 days, that means I’ve read around 419,500 words in a month. If you then suppose that, on average, a novel is around 50,000 words, then I’ve read the equivalent of 8.4 novels this month!
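Spelled out as a quick back-of-the-envelope script, using the same estimates as above:

```python
# Back-of-the-envelope reading volume, using the estimates from the text.
WORDS_PER_POST = 500       # assumed average blog post length
WORDS_PER_NOVEL = 50_000   # assumed average novel length
items_read = 839           # from my Google Reader Trends, past 30 days

words_read = items_read * WORDS_PER_POST   # 419,500 words in a month
novels = words_read / WORDS_PER_NOVEL      # ~8.4 novel-equivalents
```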
Who says people don’t read anymore? (We just aren’t necessarily reading books all the time…)
Gotta run. The Reader calls, but out of curiosity, how do you read?
The (physical) newspaper* as we know it is being chopped up before our very eyes.
Yesterday, the New York Times announced that it would be selling 13% of its front page real estate for advertising.
Back in August of 2007, they sliced off 1.5″ to save on printing and paper costs.
The page is dead, or dying, indeed.
Clay Shirky provides some excellent perspective:
…People will always be interested in information relevant to their current situation. The part of that that’s really hard journalism, like covering the city council or whatever, where it’s long and it’s boring but you got to do it, is going to increasingly have to find new business models, because we can’t just rely on Bloomingdale’s to subsidize that anymore with display ads. And so we’re going to have this move to what I think are going to be a lot more nonprofit models for news, a la NPR. But, much more importantly, the idea that there are news organizations and other kinds of organizations, I think, is just going to break down under the weight of the evidence.
* For more on the future of journalism itself, Cliff Huang and Atley Kasky over on the GOOD.is blog have an excellent take: “A Glimpse at the Future of Journalism.”
I bought the Amazon Kindle right when it came out in late 2007. It’s gotten an increasing amount of press since then, culminating in Oprah’s gushing endorsement of it on October 24, 2008. (The NYTimes recently wrote a piece about e-books which attributed their rise in interest, in part, to the sales of the Kindle.) Since Amazon does not release sales numbers for whatever reason (perhaps because the Kindle is such a minuscule part of their business), analysts estimate that there were somewhere between 250,000 and 500,000 units in circulation at the end of 2008. Anecdotally, I’ve been noticing it more and more in airplanes and airports, and I’ve been hearing reports from random friends that their parents swear by them now.
To be honest, I marveled at the thing when I first got it, mostly because of the Whispernet feature, which allows you to download a book on the fly, say, on your way to the subway, rather than stopping by a bookstore or library (just what I need: more encouragement not to plan ahead). But the visual design of the thing was wholly disappointing, and thus it began to gather dust, languishing unused on my bedside table. The Kindle had somehow failed to capture the simple aesthetic pleasure of reading.
Jeff Bezos, in his interview with Charlie Rose about the Kindle, remarked that his team’s number one design objective in the Kindle was to achieve the “flow state” of reading — that is, the ability of the physical object of the book, the paper, the ink, the binding, to disappear when the reader enters the world created by the author’s words.
I am certain it’s easier to get into this “flow state” when you’ve got something in front of you that you really, truly want to read. And on this score, Kindle (and Amazon) should have things pretty much locked up (literally) in their almost infinite catalog of selections from the major publishers. Granted, this probably took a ton of negotiating on the part of Amazon with all of the major publishers for distribution rights, but when you’re Amazon, I’m sure you can pretty much walk into the room with a baseball bat and say, “We’re doing this. You all on board? Great. Sign here.”
Note: There are some technical limitations endemic to all ebook readers that use E Ink technology (or at least that I am attributing to limitations in E Ink) that I won’t discuss here, like the supremely annoying black screen when you change pages and the menu and windowing UI (though the new Sony PRS-700 has a touch-screen interface which is much more elegant).
First, the good parts
Over the holidays, I was in the Charlotte airport, staring down an hour delay in my flight, and I walked by a bookseller where I noticed Niall Ferguson’s new book, The Ascent of Money, which I had forgotten was at the top of my list of books to read over the holidays. I opened the cover and balked when I saw that it was selling for $29.95. Somewhat in price shock, I marched back to my luggage, took out my Kindle, and downloaded a sample chapter. A few clicks in, I was hooked and determined to have it, especially after seeing the Kindle price: $9.99.
$29.95 – $9.99 = $19.96.
Which raises the question: what’s that extra $19.96 paying for? In addition to the paper, the ink and printing, the cover material, the binding, and bailing out a floundering publishing industry, I realized that much of it goes into something that may or may not break your flow state: good typography.
Whither Good Typography?
If the web is 95% typography, then e-books are somewhere in the range of 98%. And in my wide and varied research, I think I can safely say that the reason reading long texts on screens hurts so much is that there are very few people who can set type properly anymore (that, and annoying banner ads and vertical scrolling, but we will address these problems in another post). Unfortunately, this is the case with the Kindle as well. The font they’ve chosen for all body text is Caecilia, drawn by Peter Matthias Noordzij. It’s a smart choice, since it’s an Egyptian (slab serif), so you get the advantage of serifs without having to worry about the slope of the foot getting killed at smaller sizes, but the way it’s treated in the Kindle is, well, unfortunate.
Basic components of HTML are rendered rather thoughtlessly by the Kindle:
The resolution of E Ink technology is purportedly around 300 dpi. In practice, or at least the way the Kindle renders images, it reminds one of the early days of the Palm, or of mezzotint. For instance, the graph below probably has some important labels, but no matter how hard I squint I can’t make out the text. I wonder whether it would be legible if printed at 300 dpi on a laser printer. I am sure the crappiness of the image quality is due to the fact that with E Ink you have only black or white “pixel” molecules with which to render text or images, so it doesn’t matter if you have 300 dpi: you still need some levels of grey in order to do proper anti-aliasing and image reproduction. (I bet the image format on the Kindle is BMP.)
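To illustrate the anti-aliasing point, here’s a toy sketch of what pure black-or-white rendering does to a smooth greyscale ramp. This shows the general problem, not the Kindle’s actual rendering pipeline:

```python
# Why 1-bit rendering destroys fine detail: with only black-or-white
# pixels, every intermediate grey collapses to one of two values, so
# anti-aliased text edges and image midtones are simply lost.
# (A sketch of the general problem, not the Kindle's actual pipeline.)

def threshold(pixels, cutoff=128):
    """Naive 1-bit conversion: grey in [0, 255] -> 0 (black) or 255 (white)."""
    return [0 if p < cutoff else 255 for p in pixels]

# An anti-aliased edge -- a smooth ramp from black text to white paper...
ramp = [0, 32, 64, 96, 128, 160, 192, 224, 255]
# ...becomes a hard step, which is why small labels turn to mush:
print(threshold(ramp))  # [0, 0, 0, 0, 255, 255, 255, 255, 255]
```

Dithering (Floyd–Steinberg, for instance) trades spatial resolution for apparent grey levels, which would go some way toward explaining that mezzotint look.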
You would think someone on the Kindle team would have been able to spend a little time to create a style for caption text to differentiate them from the body copy. (The same is true of block quotes — no differentiating style.) I don’t know if this particular book was rushed through without any styling or what, but in the immortal words of Duke Leto (in the David Lynch version of Dune), “Really damn sloppy!”
It looks like, by default, the Kindle justifies its pages of text. This gives you a flush right edge instead of a ragged, irregular one. The pros and cons of this can be debated. There are two variables that need to be adjusted when justifying large swaths of text wholesale:
- Font size
- Hyphenation
Without control of these two factors you will certainly have rivers, ie, channels of whitespace running down the paragraphs, since whitespace, or more accurately word spacing, is what is used to justify the lines. Unfortunately, font size can be controlled by the user on the Kindle, so whenever you decide to change the font size, the word spacing changes; if you don’t have a hyphenation library on board (which it appears the Kindle doesn’t yet), you get a diluvian horrorshow:
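The mechanism behind those rivers can be sketched in a few lines. This is the general logic of naive justification, not the Kindle’s actual engine:

```python
# Naive justification: the only knob is word spacing. When a long word
# won't fit and there's no hyphenation, it wraps whole, and the leftover
# slack is spread across the few gaps on the short line -- producing the
# gappy, rivery look. (General mechanism only, not the Kindle's engine.)

def word_space(line_words, measure):
    """Average word-space width (in character units) needed to justify."""
    text_len = sum(len(w) for w in line_words)
    gaps = len(line_words) - 1
    slack = measure - text_len - gaps   # space beyond one normal space per gap
    return 1 + slack / gaps

# A normal line justifies gently:
word_space(["reading", "on", "the", "train", "is", "lovely"], 34)   # -> 1.8
# But without hyphenation, a line that wrapped early gets huge gaps:
word_space(["antidisestablishmentarianism", "is"], 34)              # -> 4.0
```

Stack a few of those over-spaced lines on top of each other and the gaps visually connect: a river.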
So, what happened to the text on the way to the Kindle?
One way to look at these typographic failures is to see them as byproducts of digitization, or to use my favorite analogy, this is what happens when you force atoms into the digital blender. Unfortunately, this is fraught with messiness (as clearly evidenced above) and it’s not clear who is responsible for the cleaning up of the digitizing mess. According to the Newsweek article:
Though Bezos won’t get terribly specific, Amazon itself is also involved in scanning books, many of which it captured as part of its groundbreaking Search Inside the Book program. But most are done by the publishers themselves, at a cost of about $200 for each book converted to digital.
Really? I highly doubt that scanning is part of the process of getting a book on the Kindle. I am pretty sure that most books nowadays begin on the computer (typed by the author on a word processor), then they are laid out by a designer on a computer, so that there is no need for them to make the round trip to print and then back again through a scanner.
Here’s what I think happens: they take the InDesign (or Quark) file used for the book, export it as XML, and add Kindle-specific markup (this is an image, this is a caption, this is a list, and so on) to turn it into the proprietary AZW format. The semantic structure of books isn’t that complicated. It’s getting them to render nicely at all page widths, font sizes, etc. that’s hard.
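To illustrate, here’s a toy version of that conversion step. The tag names and the mapping are entirely hypothetical; the real AZW pipeline is proprietary:

```python
# Hypothetical sketch of the conversion step: map semantic tags exported
# from a page-layout file onto e-book markup. Every name here is invented
# for illustration -- the actual AZW format and tooling are proprietary.

TAG_MAP = {
    "chapter-title": "h1",
    "body-text": "p",
    "block-quote": "blockquote",
    "caption": "p",       # the Kindle's sin: captions set as plain body copy
}

def convert(element_tag, content):
    # Anything unmapped falls back to a plain paragraph -- which is
    # exactly how you end up with undifferentiated captions and quotes.
    ebook_tag = TAG_MAP.get(element_tag, "p")
    return "<%s>%s</%s>" % (ebook_tag, content, ebook_tag)
```

A lazy default mapping like this would produce precisely the flattened, style-less pages described above.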
From a purely visual, typography standpoint, I’d give the Kindle a C+. Good effort, but poor attention to detail. Fortunately many of these details just need some care and adjustment and are not necessarily the result of technical failures, just laziness and poor design judgement.
Next, I’m going to check out the new Sony PRS-700, which has a touch screen and highlighting ability. Stay tuned!
Also, don’t forget to nominate your favorite online read of 2008 here.
What is a wireframe?
Depends on who you ask. I like to define a wireframe as equivalent to a keyframe in a storyboard. (The problem is that most websites are not linear, but that’s a topic for another post.) I’ve done thousands of wireframes in my career as an Information Architect/Interaction Designer (have we settled on an appropriate title yet? I guess not). I don’t live for wireframing, even though making them is one of my most marketable skills. I have developed a very pragmatic view of it, having seen how my wireframe “bibles” are clung to by business customers, salespeople and QA, and then unceremoniously forgotten after the first beta launch. The tricky thing about wireframes is that they can mean different things to different people, depending on what stage of the project you’re in.
So now I have developed a Zen-like detachment from wireframes: they are ephemeral. In fact, I make it a habit of suggesting to clients that we schedule into the project plan a wireframe bonfire, usually sometime after launch. Every time, I get a double take and a laugh of disbelief. Why a bonfire? To emphasize the reality of the end goal, the working application itself. Users don’t use wireframes. Also, I want to make the point that wireframes are dead and the application is alive, and that decisions that may have looked great on paper may, in reality, not work when put in front of real users.
The Evolution of Wireframes
Wireframes need to evolve over the course of a project. For larger projects (3 to 9 months) I’ve found it useful to clarify both for myself and the business customer what to expect from wireframes at different stages.
In the nascent stages of a project, when ideas are being thrown about by all stakeholders and everything is in flux, lo-res wireframes may appear on anything from whiteboards, scrawled on the backs of napkins or scrap paper. Think of them as thumbnail sketches.
Lo-res wireframes should be quick, rough, and cheap to throw away.
Usually, I will gather these scraps of paper and translate them into digital roughs.
Note the “default” style for everything:
- Labels should be descriptive of the content. Avoid using real copy (except for logo)
- Content areas are grey blocks with the label in the top left corner
- Multimedia or graphics are indicated by a different grey box, with label centered in middle
- Buttons are indicated by rounded corner and centered label text
Not using copy is important at this stage because you don’t necessarily want the client focusing on details yet. It can be distracting from the larger goal, which is that you want to get agreement and sign-off on basic proportional representation of types of content and their general hierarchy. At this stage, you should be able to use this to answer the question: What are the primary, secondary, and tertiary content blocks on the page?
Once you have established what content needs to appear on the page and have a good sense of their general hierarchy, you can start to add more detail, in the form of more realistic headings, navigational copy, and basic indications of functionality.
As you can see, there’s a little more resolution here, like you’re adjusting the focal length so that you can make out more clearly the major elements of the screen. But you’ll notice that not everything is in focus yet, namely the body copy, which I like to represent as grey bars to give an idea of type density without getting bogged down into specifics yet.
I’ve found that once you start introducing dummy copy and dummy data, the viewer’s expectations and reactions start to change, regardless of your frantic entreaties to disregard such characteristics. There is a point at which our attention begins to home in and we start to notice certain elements, unconsciously or not, and it takes all of your discipline and awareness as creator of these documents to modulate the viewer’s expectations through the blurring and smudging of various elements (at one point I considered applying a “Pencil Sketch” filter in Photoshop to my wireframes). But at some point, you will have to address these things, such as color, highlighting, typography, dimensionality, and data, which will bring your wireframes to a higher level of resolution and expectation.
Tomorrow: I’ll dig up some Hi-Res wireframes and talk about clickable prototypes, the ultimate in high-resolution wireframes. Stay tuned!
(This is a continuation of Part 1 of “The Page is Dead”.)
What is a “page”?
We haven’t had to think about this question for the past three hundred years or so because it’s so obvious to everyone what a page is. We’ve just been staring at them so long we take them for granted.
It’s a sheet of paper.
OK, it’s a sheet of paper that has words and pictures on it.
A book or a magazine has a bunch of them bound together.
Wrong. Take a look at Jack Kerouac’s manuscript for On The Road (at left). Is this just one long page?
Well, if you want to get technical, you might say, then, that a page is an arbitrary unit of displayed information. The “page” is, in actuality, a convenience that has been established for you, the reader. Really, it’s a design artifact of the codex, which made its debut in the first century, at a time when the scroll was the dominant format for knowledge. Actually, long scrolls were made up of “pages”, ie, individual sheets of paper or whatever material was used, stitched together, but to my knowledge there was no definite way of referring to specific pages by number until the bound book came along and replaced the scroll.
Guess what, dear readers? The scroll is back.
That’s right. You’ve probably scrolled (unconsciously) a few times already just to read this. First you fired up your browser and requested a document from the World Wide Web. The problem is your viewport (that is, the viewable area for content in your browser’s window) is somewhere between 800 by 600 pixels and, if you’re lucky, anywhere in the range of 2000 by 1000 pixels. Even so, your monitor is most likely 72 pixels per inch and this means your resolution is low, seeing as a standard printed page is somewhere between 150 and 600 dots per inch (assuming 1 pixel ~= 1 dot). Fewer dots/pixels per inch means fewer words at a legible size per screen. This is why we scroll. We scroll because most documents are longer than can fit in one viewport unit.
When we talk about a “web page” are we simply referring to all of the rendered content in one HTML document? I assume that’s what advertisers refer to when they talk about “pageviews”: how many people have requested the HTML document that has my ad embedded somewhere in it? Again, I question the use of the term “page” here, as others have questioned the relevance of the pageview itself as a measure. Businesses that lash themselves to arbitrary metrics and practice “maximizing pageviews” to increase ad inventory, without doing some honest thinking about what they’re really trying to do, deserve to fail in the coming years. Some have even called this practice of pagination evil. In any case, the page is dead, and so is the pageview.
Who killed the static page?
We now have to move towards thinking of “pages” as “screens” or even something like “views” — not “view” as in “page view” but “view” as in “a view” of something, as in a representation of data. (See Model-View-Controller.) The more sites move towards behaving like actual applications and are focused more on interaction versus mere presentation of content, the more urgent it is that we develop a better vocabulary for talking about what it is we are building.
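The MVC distinction is easy to show: the same model (the data) can be rendered by any number of views. A minimal sketch, not tied to any particular framework:

```python
# "View" in the MVC sense: one model, multiple representations.
# A minimal illustration; the names here are invented for the example.

article = {"title": "The Page is Dead", "words": 1200}   # the model

def full_view(a):
    """One view: the article rendered as a full screen of content."""
    return "<h1>%s</h1><p>%d words</p>" % (a["title"], a["words"])

def teaser_view(a):
    """Another view: the same article as a one-line teaser in a list."""
    return "<li>%s</li>" % a["title"]
```

Neither rendering is “the page”; both are just views onto the same underlying data, which is exactly why page-based vocabulary breaks down.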
We are no longer bound by the page refresh model. With this change, new user interaction idioms have surfaced on the web.
– Bill Scott, Director of UI Engineering, Netflix
“The page is dead. Long live the page!”
Over 4 years ago, Karl and I were brainstorming a piece of software that would eventually come to be called “Morpheus” (more on this later). We had locked ourselves away for a week inside a small windowless room at Community Connect Inc. where we were both employed at the time, with only a whiteboard and some paper and our ideas. CCI was embarking on a gigantic structural overhaul of their back-end and front-end architecture at the time, which would affect millions of users across three different community sites. As the information architect and lead front-end developer, we were faced with the task of re-imagining every single interaction and functionality and somehow representing these changes visually. That is, we had to make a crapload of wireframes.
“What? You want wireframes for every single page?!? Are you kidding me?”
Any IA who has faced this knows it is a Sisyphean task. “Oh, you forgot the state when a user is logged in but has no friends, so they don’t have access to this widget on that section.”
This problem runs many layers deep. The Business Customer knows (very generally) what he wants the application to do; he just can’t paint the picture accurately enough but, like pornography, he knows it when he sees it. That’s where the Information Architect comes in (I use the term more as a role than a job description since in reality most IAs wear many hats). The IA is responsible for translating these nebulous “business requirements” into visual blueprints which are then consumed by a whole host of people:
- Visual Designers, whose job is to put the flesh and makeup on the bones
- Front-End Developers (FEDs), who translate the wireframes into structural, semantic HTML first, then, after the Visual Designers are done skinning, slice and dice the PSDs or Illustrator files, translating the design into working web pages with CSS
- Back-End Developers (or Developers or Engineers or Programmers), who use the wireframes to determine how the databases that serve the content get structured, and what functionality needs to be coded to extract every element of data in order to populate every page
- The Business folks (including Sales) who use the wireframes to present their product idea to their board or to investors to get funding, and to sell the idea to potential customers
- Usability (User Experience Experts, Interaction Designers) who use the wireframes to actually test the screens and interactions with actual users to see if they make sense
Faced with the prospect of modeling every single state of every single “page” in a community website that had a panoply of features ranging from chat, personal pages, notes, news articles, not to mention all of the administrative pages that needed wireframing, Karl and I were convinced we needed to approach this problem in a different way. We had to come to terms with the fact that the “page” was no longer what it used to be.
To be continued…