Monday, July 25, 2005

Overhead

Recently I had occasion to hook my laptop up to a television to watch some video, and afterward I left the connection active for a while even though I was working on the laptop screen. Something that struck me fairly quickly: the desktop, mirrored on the TV screen, became a different object entirely--foreign and nonsensical. The television in particular seems key here, because I've never had this impression when working on a projected LCD screen. It may be that my learned expectations for what I see on TV are more finely honed, crafted by years of smooth entertainment.

Most noteworthy were the cursor movements. What seems quick and efficient on the desktop was painfully slow and awkward to watch on the TV screen. When I became a viewer rather than a user (even though I was still controlling the mouse), I lost the cognitive excuse for what were suddenly interminable delays. The second or two it takes to move the cursor from point A to point B was frustrating, because the shift in viewing medium confounded my mind's usual subconscious trick: directing my attention to the effort required to accomplish the task, rather than to the task or goal itself.

This made me consider the efficiency of our current input devices in a new light. Even HCI experts generally take for granted that the mouse is an efficient input device, and focus on other aspects of the system. But how efficient is it, really? Something more than a keyboard is required to navigate a 2D GUI, but I wonder how much time in a day we actually spend moving the mouse, rather than working with what it brings us. Eye-movement detection is the holy grail here, but it is likely still many years off. What alternatives are there today?
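For a rough sense of the scale involved, pointing time is commonly modeled by Fitts's law. Below is a minimal back-of-the-envelope sketch in Python; the constants a and b are illustrative assumptions, not measurements of any particular mouse or screen:

    # Fitts's law: T = a + b * log2(D / W + 1)
    # D = distance to the target, W = target width; a and b are
    # device-dependent constants (the values below are assumptions
    # for illustration only).
    import math

    def pointing_time(distance_px, width_px, a=0.1, b=0.1):
        """Estimated seconds to acquire a target, per Fitts's law."""
        return a + b * math.log2(distance_px / width_px + 1)

    # Crossing a 1280-pixel screen to hit a 20-pixel-wide button:
    print(round(pointing_time(1280, 20), 2))  # ~0.7 seconds

A few hundred such acquisitions over a working day add up to minutes of pure cursor travel: trivial per click, real in aggregate.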

Tuesday, July 12, 2005

GUI Gripes

Specifically, XP gripes. It is frustrating that with all that 'innovation' going on at Redmond HQ, the use(r) model for the current version of Windows seems little changed since version 3.1. The novelty of being able to run more than one application at a time has worn off, but the limitations remain largely intact.

Now these are broad, sweeping statements, and I don't want this to turn into a rant. The particular thing I have in mind is the inability to customize or organize these multiple windows in whatever way the user finds most comfortable, productive, or accessible. Nearly all alternative OSes offer multiple workspaces (virtual desktops), so that applications, folders, and documents can be grouped conceptually according to the user's wishes. Even more basic would be the ability to drag and drop items within the taskbar (like Firefox tabs), rather than locking them into chronological order. The default 'stacking' of multiple windows from the same application is even worse: windows get hidden, and their order becomes confusing.

Perhaps the average user doesn't share my dissatisfaction, but as someone with a dozen or more windows open at a time, I find the visual aspect of application management to be sorely lacking. Minimizing and maximizing is no substitute for organizing.

Friday, July 08, 2005

The Other Digital Divide

Discussions of the "digital divide" generally revolve around access: who has it, who doesn't, how economic underclass status correlates with (lack of) access, and so on. This is all well and good, and a fine topic for traditional sociologists to take up. But the form of the divide that strikes me personally revolves more around literacy: a man who owns a thousand books but can't read is more informationally disadvantaged than the literate man with no books.

I know people who own computers and have internet connections... but who never use them beyond editing the occasional document or spreadsheet. What's more, they have no interest in doing more (or in learning how). To some extent this seems to be a generational artifact: children today gain a great deal of digital literacy through cultural osmosis. So this particular divide will likely solve itself in due time, at no great loss.

But the same might be said of the access gap. As technology becomes cheaper and more pervasive, it will be difficult to find people and places without access. Someone will always complain that the rich and well-educated have better/faster access. This is true, and it is an unavoidable aspect of a market economy. The expansion of availability is self-propelling and self-perpetuating; the same force created the consumer-level availability of the technology in the first place. Social critiques aside, it works well at doing what it does.

At this point, with pervasive access, the problem revolves back to literacy. It's part of a loop which we've been inside since at least the Gutenberg press. But the destitute literate will find a way to read (libraries today appropriately offer public internet access as well as books), just as the well-to-do will have means beyond their desires. This is in fact part of what keeps the system dynamic. My point, if I have one, is that digital literacy should take precedence over mere access. Literacy will find (or create) access for itself, but access without literacy sits dormant.

Of course, gaining literacy requires at least minimal access. Additionally, the kind of literacy I have in mind requires a certain investment of time, and exposure to literate culture. It is not limited to technical/functional literacy, but inculcates an appreciation for the current and future potential of the medium, especially beyond one's personal uses. Overambitious, perhaps, but I'll negotiate when I have a budget to meet. Teaching this sort of broad literacy is bound to be no easy task. My unlikely suggestion for top consultants on the issue: English (and other language) teachers.