Tuesday, April 24, 2007

Google's Web History: movement toward non-universality?

Google's recent announcement of their new "Web History" feature has been greeted with the usual chorus of privacy concerns. While these are undoubtedly relevant, the significance of the announcement strikes me somewhat differently. The progression of Search History into Web History is a subtle but significant conceptual change, and may indicate--for better or worse--a gradual paradigm shift in Web use.

Search History is an archive of what I've sought, what I wanted but didn't have. The content is inherently local: as an agent-based concept it is generated by me, and typed by my fingertips. It is a history which is inherently, though perhaps counterintuitively, public. A query is not a query until it is uttered or typed--at which point it is no longer contained by my person. A history of my searches is a pointer from my mind toward the outside world.

Web History is not a simple progression but a significant transformation--a history of what I have (presumably) seen. The content is inherently remote, as the content which I have viewed is neither created by me nor stored locally. And yet, the concept is also private; what I have seen exists only in my mind, though it has external referents. A history of my web viewing is a collection of outside material pointing to my mind and memory. [This is where privacy concerns are especially valid--search history is circumstantial and implicative; viewing history is explicit yet still lacking context.]

My use of locality here refers to the concept of the history being stored, not the location of the bit-lists themselves (which could theoretically be either local or remote). In short: search history is what I have asked for, external; web history is what I have seen, internal.

The reason locality is of interest is its interaction with universality. Since the early days of the Web, what's true for one user has, by and large, been true for all others. Web History is a small but clear step toward a Web which may not be universal. The creation of fully-fledged "My Web"s, self-referential borrowed universes which are no longer merely lists of interests but rather lists of experience, contemplates the existence of a distributed yet no longer universal Web. The local public--i.e., search history--is potentially accessible remotely and by all, though unlikely to be of interest to most. Web history, on the other hand, is a remote private which is of no use to anyone other than its creator, as it is merely a list of potential, rather than actual, experience--yet it invites expansion and re-exploration by its creator. And therein lies the possible danger to universality.

I'm not disparaging this new feature or advising against its use--it is, in fact, precisely the idea I had a couple of years ago and nebulously wished Google would develop. But it may be useful to consider its implications. Google's announcement is not the first or only factor in such a paradigm shift, nor will they be solely responsible for whatever shifts occur. However, it is an indicator, a fin breaking the endless ocean surface of data and moving toward a distant shore.


Saturday, October 08, 2005

Charge It

When was the last time you heard someone mention that their mobile phone battery had run out, but they hadn't noticed (either until they tried to make a call, or someone asked them why they hadn't been answering their phone)? I recall, years ago, this being a fairly frequent claim. Perhaps not always true, but common enough to always be plausible.

Today the claim is less common, and less plausible. Having been incorporated more fully into our routines, we notice more quickly when our charge is low or dead. Certainly there are still some absent-minded folk who forget to charge their phones, but today they are the exception rather than the rule. I wonder if, in the early days, people regularly forgot to gas up their cars?

On the other hand, another (somewhat contradictory) experience which many people will share: having a dead phone battery and access to another mobile or landline, which is of no value because the number you need is trapped inside your phone's lifeless corpse.

Thursday, October 06, 2005

Mini Book Review (shoddy on my part)

Slow brain day, just reviewing old reading notes. I'll put on my literary criticism hat and review a few statements taken from Hamlet on the Holodeck: The Future of Narrative in Cyberspace. I read this book some time ago and seem to recall enjoying it, but when I look back on the quotes I jotted down, I seem to have changed my position. So, with minor apologies to Ms. Murray for not addressing the work in full:

A major point the author makes is that virtual worlds, exemplified by the Star Trek holodeck, are "enticing but unenslaving". We are "in control of the mirage". I'm not sure I can agree with that. Nor would many Koreans, whose countrymen are dying in front of their computers at a surprising rate. I've felt the siren song of fictional digital worlds, with their implicit encouragement to forsake other activities. I've mostly triumphed in these confrontations, but I've watched others fall to it (though not in the literal Korean sense). We are, in fact, not in control of the mirage. This is neither novel nor inherently dangerous, however--we're not in control of the "real world" either. Her claim implies a clear-cut distinction between fiction and reality which many people don't share.

A line I probably nodded in agreement with as I first read it: "The commitment to any particular story is a painful diminution of the intoxicating possibilities of the blank page." Now I realize that only a literary academic could compose such a sentence. In the practical world of gaming and virtual worlds, players most certainly want a particular story. There are virtual worlds with, and virtual worlds without, strong narrative, but the presence of the former shouldn't overshadow the greater popularity of the latter. The trick, actually, is that the two aren't mutually exclusive. You can provide players with a story, and then all the blank pages they want to amend, interact with, and add to that story. Providing a particular story in the first place creates an anchor for all those blank pages. Virtual worlds aren't entirely different and separate from the analog world--they are extensions.

"When we enter a fictional world...we do not suspend disbelief so much as we actively create belief." True enough, but this implies that we don't create belief in non-fictional worlds--when we do all the time, and rely on it. I believe that you love me. You believe that aliens ate your dog. The creation of belief is not exclusive to fiction (digital or analog), and shouldn't be analyzed as such. Literary analysis of technical creations has huge potential offerings that the social sciences neglect, but this book isn't at the top of the pile. (One book that is at the top of the pile is How We Became Posthuman. A dense but rewarding history of cybernetics from a technically knowledgeable, but literary vantage point. And if you think that cybernetics don't interest or affect you, this book will show you how wrong you are.)

Monday, October 03, 2005

Power Tools and UCD

Donald Norman makes the point in a recent ACM article that user-centered design has various drawbacks. This is mild heresy in some corners today, where "user-centered design" is the latest buzzword. But he makes a convincing argument for activity-centered design (rather than UCD) in certain cases. He notes, for example, that violins and pianos (and musical notation in general) would not pass muster in a UCD review. Having played both instruments for years and suffered accordingly, I can vouch for the truth of that assessment. But those instruments aren't designed to be easy to use; they're designed to serve a particular purpose, and to serve it well. A violin which was comfortable to play would no doubt prevent virtuoso performances.

Tangentially--to what extent are the arts appreciated as such for precisely the fact that they are not easy to learn, perform, and master? In the past, both production and use of many goods took practice and skill. Today, everyone is working to remove skill from use and place the bulk of it in design, production, and marketing. And, overall, this is undoubtedly a benefit.

The key word is "overall". UCD undoubtedly makes products easier to use. This is often inaccurately referred to as making the products "better". Actually, they are better, but in a narrow sense: they are better products for the mass market (and they will hopefully sell better). They work more easily for most people. If sales and market share are your goal, then this is probably the route to take. But it's important to realize that this form of "better" doesn't mean that the product is better at doing what it does. On the contrary, it will often be worse: less efficient, less powerful, less flexible.

Of course it's not always this way. As Norman points out, good design doesn't allow methodology (regardless of particulars) to dictate outcome. It should be taken as input or suggestions, not law. And it may be that UCD is an excellent tool for iterative tweaking of ACD-conceived products. The happy reality is that today, individuals often have a choice: the market provides UCD tools as well as the more difficult power-user tools.

I'll bet you've never thought of a violin as a device for power users.

Friday, September 30, 2005

Sentiencity

"Do you think a city can control the way people live inside it? I mean, just the geography, the way the streets are laid out, the way the buildings are placed?" --Dhalgren

Monday, September 26, 2005

Bronze Waterfall


Back of building in London, near the cigar building. The front doesn't look like this. A shame to hide these lines in the back.

Saturday, September 24, 2005

Literary Lookup

The Oxford English Dictionary is my favorite reference tool in the world. My second favorite is a particular kanji dictionary. After the OED, a good thesaurus comes in handy (actually, I have yet to find what I consider a good thesaurus).

Another linguistic reference which some might find handy is one which allows you to specify a noun, and be presented with a list of verbs which commonly act upon it. Or specify the verb and get nouns. Or nouns and adjectives. The point is to be able to specify related and/or surrounding words, rather than the word itself. This is for when you have a sentence in mind (or even written) save for one word, which you know you know, but can't recall. I know I'm not the only one.

Do I even need to point out that this is a slam-dunk job for Google, who already has a zillion sentences indexed?
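To make the idea concrete, here is a minimal sketch of the noun-to-verb direction: counting which verbs most often take a given noun as their direct object across a corpus. I'm assuming spaCy and its small English model here, and the three sample sentences stand in for the zillion Google has indexed; this is not how such a tool would actually be built at scale.

```python
# A minimal sketch of the "related-word lookup" idea: given a noun,
# list the verbs that most often take it as a direct object in a corpus.
# Assumes spaCy and its small English model are installed.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def verbs_for_noun(noun, sentences, top_n=10):
    counts = Counter()
    for doc in nlp.pipe(sentences):
        for tok in doc:
            # direct objects whose lemma matches the noun we care about
            if tok.dep_ == "dobj" and tok.lemma_ == noun and tok.head.pos_ == "VERB":
                counts[tok.head.lemma_] += 1
    return counts.most_common(top_n)

corpus = [
    "She poured the tea and buttered the toast.",
    "He spilled the tea all over the manuscript.",
    "They brewed tea every afternoon.",
]
print(verbs_for_noun("tea", corpus))  # e.g. [('pour', 1), ('spill', 1), ('brew', 1)]
```

Swap the roles (filter on the verb, count its objects) and you get the verb-to-noun direction; add adjective-noun pairs and you have the rest of the reference I'm wishing for.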

Friday, September 23, 2005

Popularity vs. Value

The recent move by the New York Times to make their flagship columnists available online only to paying subscribers is a good example of a tactic repeated from time to time by many different types of service and product providers. The logic runs like this:
- the NYT wants additional revenue
- it justifiably notes that their online readership exceeds their print readership, and tries to leverage that demographic
- their op-ed columnists are by far the most consistently emailed and linked features
- so...charge for the most popular content!

The overall reaction seems lukewarm at best. The Times has yet to announce any subscription numbers, which implies underperformance. But this should come as no surprise. Jay Rosen neatly sums up the particulars of the case:
"The value proposition there is muddled...do we value Nicholas D. Kristof's column more if he's an 'exclusive?' We don't. In fact, it's probably the reverse. If everyone is reading a columnist, that makes the columnist more of a must have. If 'everyone' isn't, less of a must. 'Exclusive online access' attacks the perception of ubiquity that is part and parcel of a great columnist's power."

Perhaps the NYT is mistaken as to their own importance. Rosen's business analysis is apt, but doesn't speak directly to customer action. I think there's a more general principle that can be extracted here.

The powers at the Times have obviously equated popularity more or less directly with value. The columnists are popular and widely read, therefore they must be of value to the individual, right? Sometimes this is certainly true--Harry Potter has made a hobbyist housewife into a billionaire author. But this relies on social networking effects for its strength. Many schoolkids (and adults) can't bear not to be part of the in-group when the topic of Harry Potter comes up. They'll pay; not just for the pleasure of reading the story, but also for the social potential it brings.

Within the chattering class, this may hold true for NYT op-ed columnists. But what percentage of the readership numbers do they account for? I suspect that most people reading and even emailing these articles could do without them--although the NYT is betting that they can't. The flip side of this value equation is the value of the individual's time. Many of us have amounts of time here and there which we don't value very highly. In modern life especially, this can come in small chunks. Idle minutes at the office between tasks are a daily reality for many workers. Many of us read favorite sites online during this time. If a lot of those people read the NYT columnists, it is not necessarily because the columnists are deemed important, but because they can quickly and cheaply fill some quick, cheap time. If either the time or the reading is no longer cheap, the pattern dissolves.

This behavior also applies to larger chunks of time. I've played more than one free MMORPG, and enjoyed the hours I've spent on them. But when they become premium services, I won't pay for them. This is not because I'm so poor or cheap that I can't--but they're just not worth it to me. In other words, I use the service because it is free, not because it is a valuable service. It is a good match for my cheap time, when I have it, but a premium service, even if higher quality, would force me to recoup my losses, so to speak, with greater time investment that I'm just not willing to give.

Hyperhouses

Interesting temporary urban art in Rotterdam, where a block of houses marked for demolition has been painted bright blue. To me, the building looks like a hyperlink amongst a row of plaintext houses. Makes you want to click on it and see what it contains. The hyperlink isn't the artist's metaphor, but drawing that kind of attention does seem to be his intent.

Thursday, September 22, 2005

Social Scaffolding

Infrastructure: [photo]

Ultrastructure: [photo]
I was looking for a word which was the opposite of "infrastructure", and, playing the Latin root game, instead found an obscure biology word. Ultrastructure refers to the fine-grained, minute cellular structures that comprise tissue and the like; sort of infrastructure's infrastructure. Seemingly the opposite of the direction I was intending.

But the diversion allowed me to more carefully consider infrastructure. The other word I'm looking for is for the human (inter)action with the built infrastructure. The social scaffolding into which infrastructure is built and operates. The complement without which the infrastructure has no meaning. For now I'll call it "amphistructure", because this structure should have two sides, one which interacts with the infrastructure as necessary, and the other which interacts with the social structure. Interactions can happen on either side, or may require connections from one side to the other, and back again. I suspect that a circular model might be more appropriate, but I just don't see it yet.

And it may be that ultrastructure is what connects the amphistructure to both sides. The technical ultrastructure pictured above is what we interact with at a human level. Likewise, the social ultrastructure that Erving Goffman and other sociologists examine connects the other side.

An interesting implication of this way of thinking is the set of points of contact between the structures. It allows for different conceptual analyses of socio-technical use. Rather than seeing the mobile phone as a tool through which social relations are (more or less) transparently carried out, we can see the phone as the ultrastructure connecting amphi- and infra-. The infra- then connects back to the amphistructure at a different location, creating a gateway to the sociostructure to complete the activity. The infrastructure thus expands the range and points of contact available from within the amphistructure for all those interactions not available strictly on the sociostructure side of things.

Still sounds a bit muddled. Maybe diagramming it out will help. The ultrastructure seems to be where the interesting things happen.
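As a first stab at diagramming it out, here is a rough sketch that treats the structures as nodes in a graph and the mobile-phone example as a path through them. The node names and edges are only my own guess at how the pieces fit together, not a settled formalization.

```python
# A loose sketch of the amphi-/ultra-/infra-/sociostructure idea as a graph.
# The particular nodes and edges are illustrative assumptions, not a model
# anyone has agreed on.
import networkx as nx

g = nx.Graph()
# The phone, as ultrastructure, connects the human (amphistructure) side
# to the technical infrastructure...
g.add_edge("amphistructure (caller)", "ultrastructure (phone)")
g.add_edge("ultrastructure (phone)", "infrastructure (network)")
# ...which re-emerges at another point of contact and completes the
# activity on the social side.
g.add_edge("infrastructure (network)", "ultrastructure (phone, far end)")
g.add_edge("ultrastructure (phone, far end)", "amphistructure (callee)")
g.add_edge("amphistructure (callee)", "sociostructure (the conversation)")

# The "expanded range of contact" is just the path the interaction takes:
print(nx.shortest_path(g, "amphistructure (caller)",
                        "sociostructure (the conversation)"))
```

Even this crude version makes the earlier point visible: the interaction never touches the sociostructure directly; it gets there only by crossing the ultrastructure twice.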

Wednesday, September 21, 2005

Philip Morris hates Vodafone

A report from last month indicates that mobile phone use may be responsible for sharp declines in teen smoking. The analysis focuses entirely on economics: mobile phone bills are expensive, teens have limited cash, talking is more important than smoking, so they pay the phone bill rather than the Marlboro man. Nice and neat, and surely a fair enough assessment.

But I can't help thinking that this explanation is incomplete. After all, teens especially are not the rational actors that most economists pretend people to be. There's another important angle here: mobile phones are cigarette surrogates.

Think of the conventional wisdom for quitting smoking. Hold a pencil in your hand (to give your fingers something to hold). Chew gum (to keep your mouth occupied). Now the mobile phone: fits in hand, can be held aimlessly for hours? Check. Gives your mouth something to do? Check. Smoking also helps you look cool (my theory on this is that smoking makes you seem cool because it gives you an object with which to ignore the immediate surroundings. But it's just a stage prop. The truly cool don't need the object), and similar posturing can be achieved with proper phone usage. The phone can likewise lend an air of mystery or rebelliousness, depending on its use.

So my guess is that the economic angle is well and good, accurate so far as it goes. But it's not complete, and it so happens that the mobile phone replaces cigarettes more than just economically. Is an SMS an equal tradeoff for a smoke? Is a quick chat worth 2? Is pulling out your phone an acceptable alternative (in terms of social positioning) to lighting up, in the high-pressure world of teen politics? My guess is yes, yes, and yes.

Monday, September 19, 2005

Inhuman Agency Insufficient: Counterpoint

I'm a pathological Google user, and I'm quick to praise the company and its tools. But even the excellent search engine rewards (and occasionally requires) finesse to find the desired results. Understanding that [cook bake pie] and [pie bake cook] will bring up similar results, but differently ordered, is important. This is precisely the catch-22 of all abstraction layers: they can make information more accessible, but less accountable for its origins. Because the Google search engine second-guesses my intent, I have to second-guess it in return when I form my query. This is not a complaint; because the Google engine is constantly adapting its output to the human use of information, rather than the information itself, I suspect that the Google method has more promise in this area than the major proposed near-future companion technology--the Semantic Web.

The Semantic Web, for all of its interesting propositions and potential benefits, has a major weakness in this analysis. By making more information machine-readable, and proposing (necessitating, actually) that machine agents then assume a greater role in information search/retrieval/presentation, it removes a critical aspect of quality control. The visual presentation of a web page provides countless subtle clues that inform our assessment of its quality, reliability, and authenticity. There are a thousand shoddy-looking scam sites out there for every rare convincing one. Likewise, by stripping the immediately desired information from the flow of presentation in a particular source, important context clues are removed. Even the most paranoid alien-conspiracy sites will occasionally have a bit of sane-sounding output, if removed from the source. But the source is important, and many of the most important qualities of a source can't be analyzed with metatags.

This is not to suggest that the SW won't happen in some form, or that it shouldn't. But it won't be a magical cure for the world's information organization problems. Google, of course, will be among the first to put new semantic metadata to good use, which will probably be a big help. And of course the two methods aren't mutually exclusive. But the Google approach says: let us have a look at everything, however arbitrarily arranged, and we'll make it useful by observing how it is used, and basing our output on that. The SW says: follow our recommendations for how your information should be marked, and if everyone complies, then the information will automatically be more useful. But metatagging everything won't, on its own, make information organization easier, and certainly won't ease quality assessment. On the contrary, it will require ever-expanding infrastructure to manage it. The lengthy debate over HTML use and misuse (re: standards) will simply be moved behind the markup. Human, contextual assessment will remain the final judge of information's value, and we should be careful not to (accidentally or intentionally) remove it from the picture in the name of simplification, efficiency, or usability.

Saturday, September 17, 2005

Inhuman Agency Insufficient

It seems that in the short-to-mid-term future, information--on everything from the trivial to the global--will be easily accessible, and in staggering quantities. From one perspective we are already at that point, but I believe that in the near future, more and more information will be meaningfully metatagged, allowing more meaningful results than today's simple web searches (how and whether we get to that point is another discussion entirely; here its realization will largely be taken for granted).

The question is what will become of all this information. I'm thinking more broadly than just information presentation at the HCI or cognitive psychology level: in what ways can vast amounts of data be sifted on demand to create powerful results, in an intuitive form? The important difference is between already existing tools for specialized and power users, and tools for the average user. The 'digital divide' of the future won't be between those who do and don't have simple access to the internet--the meaningful difference will be between those who have the ability to *use* the staggering quantities of information (already) available to them, and those who are simply 'surfing' the net.

I would argue that this has always been the case, from the beginning; access to IM and email admittedly makes new habits and lifestyles available, but the purely communicative applications of the present--meaningful as they are--are already becoming a social commodity. The next step will be to allow individuals to make the world (the world's data) work for them in ways that are comfortable: social, intuitive, responsive ways. At the heart of this notion is the idea that the 'smart' or 'intelligent' or 'agent' systems of the future (at least the mid-term future) will not be able to deliver their much-speculated benefits entirely on their own. Problems concerning human agency, context, and classification may never be entirely automated. Thus the person who understands the conceptual background and organization of the world's information will always have an (increasingly larger?) advantage--socially, economically--over the person who is at the mercy of such systems' recommendations.

The question to be considered: can incisive (social and technical) understanding combine with good design to create power-user tools for everyone? To what degree can "knowledge in the world" compensate for the expert's understanding, of both subject background and technical mastery?

Wednesday, September 07, 2005

Daydream

"Just as an individual person dreams fantastic happenings to release the inner forces which cannot be encompassed by ordinary events, so too a city needs its dreams." --A Pattern Language

Wednesday, August 31, 2005

Not All Experts Have PhDs

Expertise comes in so many forms--I just saw a garbage collector empty a bin on a street corner; the type with a large hard plastic container that sits inside a more aesthetic permanent fixture. He took out the plastic can, walked to the truck and emptied it...then heaved it 20 feet through the air into its metal streetside housing. The pitch looked unlikely at best; it wobbled and spun in the air...and landed absolutely perfectly, nothing but net so to speak, saving him the time and effort of the expected course.

Corollary: the tyranny of bureaucratically derived standards*: the way in which the lowest common denominator becomes the mean, or even above it. The bell curve atrophies when manipulated. There is no doubt an ISO 9001 procedure which describes the proper and approved way to return an empty trash can to its receptacle. Happily, this standard was ignored.

*I'm not blindly railing against standards in general; on the contrary, they are absolutely vital to everyday life and we couldn't do without them.

Saturday, August 20, 2005

Social Bandwidth, Historical Progression of

Some disorganized thoughts on the bandwidth* involved in human communication. As history unfolds, we have continually expanded our social bandwidth (for interpersonal communication) over distance:

post/mail
telegraph
telephone (now portable)
internet (now semi-portable)
[future?]

But it is most likely the advent of synchronous communication (starting with the telephone), rather than sheer bandwidth, which was most responsible for quickening the pace of social change. Synchronicity also increases the time obligation and social burden of communication:

Stratification of depth of contact | Synchronicity | Time burden/Social obligation
post/SMS/email (one-way text) | asynchronous | minimal
voicemail (one-way voice) | asynchronous | minimal
IM (two-way text) | semi-synchronous | low
telephone (two-way voice) | synchronous | moderate
videophone/webcam (sound+sight) | synchronous | moderate/heavy
VR (future progression?) | synchronous | moderate/heavy
"real" F2F contact | synchronous | heavy

But one thing which seems apparent now is that the bandwidth required for meaningful social interaction is quite low. This actually pleases mobile phone carriers: they never expected SMS to be popular, but they are happy with their enormous profit margins (bit-for-bit, roughly 10,000x the margin on voice calls. Yes, ten thousand times). It also confuses them: if people don't need high bandwidth to communicate, and communication is the nearly exclusive use of mobile phones, why would they pay for high-bandwidth media services? Thus their conundrum. The bandwidth needs of communication haven't changed significantly since the 19th century, but the available bandwidth is increasing. What's to be done with it? Mobile carriers are going to have to figure out how to carry new traffic over their networks to prevent themselves from becoming a commodity service. But I digress.

Synchronicity, immediacy, and throughput: today we have the ability to select the best medium for our purposes. We can micro-customize our social interactions to a degree unthinkable 20 years ago. What's my purpose in saying this? I'm not sure. But it is interesting to consider the interrelationship of these three qualities, and the varied social demands they create.

One other point worth mentioning. In general, synchronous, immediate contact has the highest social obligation. But this is already changing. Face-to-face conversations today may be cut short by a mobile phone, and if the call is urgent, may put an abrupt end to the meeting. This rather novel phenomenon of being socially trumped by a non-present party is becoming increasingly socially acceptable.

Further thoughts later on the digitization of personal space and time.

*"Bandwidth" is defined loosely here as a multiple of speed and communicative throughput potential. I know that makes CS people cringe.

Thursday, August 18, 2005

Google Tool Suggestion

I know I'm not the only one who's had this problem: you follow a link, or select an old bookmark for a site, and come up with a 404 error from the server. If you're "lucky", you'll be redirected to the site's main page. If you're really lucky, you may be redirected to the site's internal search engine. Common occurrence, right? It often happens after a major site redesign.

The problem is that you often don't know (or can't remember) enough about the page's content to form a useful query. Even if you can, many sites' search engines are terrible.

So--when (re)designing a site: archive the entire old site. Now, it would be possible to simply make the old pages available with a warning that they are outdated, but this might be undesirable for many reasons.

So--keep an archive of the past site's structure and content, and when a request for an outdated page comes along, run a comparison of that page (which is available only to the server) and find the best current match. Offer this to the user with an explanation. This makes the unavailable information useful, at zero cost to the user. Best of all, it doesn't require an explicit mapping of old URLs to new URLs. A "correct" match should be easily accomplished in most cases, although I don't have the algorithmic expertise to properly say. This also improves upon the Google cache solution by aligning the user with the new site paradigm, rather than just presenting the old information (which can be useful as well, for different needs).
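As a rough illustration of what that comparison might look like, here is a minimal sketch that scores each page of the new site against the archived text of the requested page and returns the closest match. TF-IDF cosine similarity via scikit-learn is my own assumption here; the page texts and URLs are placeholders for whatever the server actually has archived.

```python
# A minimal sketch of the 404-matching idea: given the archived text of the
# old page that was requested, find the most similar page on the redesigned
# site and suggest it to the visitor. Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def best_current_match(old_page_text, current_pages):
    """current_pages: dict of {url: page_text} for the redesigned site."""
    urls = list(current_pages)
    docs = [old_page_text] + [current_pages[u] for u in urls]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    # Compare the archived page (row 0) against every current page.
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    best = scores.argmax()
    return urls[best], float(scores[best])

# Example: an old "contact" page should map to the new contact page.
archive = "Contact our editorial staff by phone or email."
site = {
    "/about/contact": "Reach the editorial staff: phone, email, postal address.",
    "/products": "Our product catalogue and pricing.",
}
print(best_current_match(archive, site))
```

A real deployment would precompute the index and wire the result into the 404 handler; the point of the sketch is only that no explicit old-to-new URL mapping is needed.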

Note to Google: my services are available for a reasonable fee.

Tuesday, August 09, 2005

Reality Break

"The subconscious did not exist anywhere until Freud wrote about it at the beginning of the twentieth century" --from The Art of Reading Dragons

True? False? An interesting starting point for long-winded discussion at any rate. On the same theme, from an entirely different angle, is the possible time-irrelevant nature of quantum action. Choices today can(?) affect the past. Greg Egan takes this idea and runs with it on a macro scale in Distress.

Monday, July 25, 2005

Overhead

Recently I had occasion to hook my laptop up to a television, to watch some video, and afterward left the connection active for a while even though I was working on the laptop screen. Something which struck me fairly quickly: The desktop, mirrored on the TV screen, became a different object entirely--foreign and nonsensical. The television in particular seems key here, because I've never had this impression when working on a projected LCD screen. It may be that my learned expectations for what I see on TV are more fluid, finely honed, and crafted for smooth entertainment.

Most noteworthy were the cursor movements. What seems quick and efficient on the desktop was painfully slow and awkward to view on the TV screen. When I became a viewer rather than a user (even though I was still controlling the mouse) I lost the cognitive excuse for what were suddenly interminable delays. The second or two to move the cursor from point A to point B was frustrating because the shift in viewing medium confounded my mind's usual subconscious tricks, whereby I direct my attention to the task or goal I want to accomplish rather than to the effort required to accomplish it.

This made me consider the efficiency of our current input devices in a new light. Even HCI experts generally take for granted that the mouse is an efficient input device, and focus on other aspects of the system. But how efficient is it, really? Something more than a keyboard is required to navigate a 2D GUI, but I wonder how much time we spend in a day actually moving the mouse, rather than working with what it brings...Eye movement detection is the holy grail here, but is likely still many years off. What alternatives are there today?
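For a sense of scale, here is a back-of-the-envelope sketch using Fitts's law, MT = a + b * log2(D/W + 1), to estimate cursor travel time. The constants, distances, and the daily movement count are all assumptions chosen for illustration, not measurements.

```python
# A rough estimate of how much of a day goes to steering the cursor,
# using Fitts's law. All numbers below are assumed for illustration.
import math

A, B = 0.10, 0.15          # assumed intercept (s) and slope (s/bit) for a mouse

def movement_time(distance_px, target_width_px):
    """Estimated time (s) for one pointing movement, per Fitts's law."""
    return A + B * math.log2(distance_px / target_width_px + 1)

# Say an average movement crosses 600 px to hit a 30 px target,
# and a heavy user makes a few thousand such movements a day.
per_move = movement_time(600, 30)
daily = 3000 * per_move
print(f"{per_move:.2f} s per move, ~{daily / 60:.0f} minutes a day moving the cursor")
```

Under those assumptions the answer comes out to tens of minutes a day, which is exactly the kind of overhead that stays invisible until a change of medium exposes it.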

Tuesday, July 12, 2005

GUI Gripes

Specifically, XP gripes. It is frustrating that with all that 'innovation' going on at Redmond HQ, the use(r) model for the current version of Windows seems little changed from version 3.1. The novelty of being able to run more than one application at a time has worn off, but the limitations have not been loosened much.

Now these are broad, sweeping statements, and I don't want this to turn into a rant. The particular thing I'm thinking of is the inability to customize, or organize, these multiple windows in whatever way the user finds most comfortable, productive, and accessible. Nearly all alternative OSes offer multiple virtual desktops, so that applications, folders, and documents can be grouped conceptually according to the user's desire. Even more basic would be the ability to drag and drop items within the taskbar (like Firefox tabs), rather than forcing them into the chronological sequence they remain locked into. The default 'stacking' of repeat applications is even worse, with windows hidden and their order obscured.

Perhaps the average user doesn't share my dissatisfaction, but as someone with a dozen or more windows open at a time, I find the visual aspect of application management to be sorely lacking. Minimizing and maximizing is no substitute for organizing.

Friday, July 08, 2005

The Other Digital Divide

The general form that discussions of the "digital divide" take revolves around access: who has it, who doesn't, how economic underclass status correlates with (lack of) access, and so on. This is well and good, and a fine topic for traditional sociologists to take up. But the form of the divide which strikes me personally revolves more around literacy: a man who owns a thousand books but can't read is more informationally disadvantaged than the literate man with no books.

I know people who own computers, and have internet connections...but never use them beyond editing the occasional document or spreadsheet. What's more, they have no interest in doing more (or learning how). To some extent this seems to be a generational artifact: children today gain a great deal of digital literacy through cultural osmosis. So this particular divide will likely solve itself in due time, at no great loss.

But the same might be said of the access gap. As technology becomes cheaper and more pervasive, it will be difficult to find people and places without access. Someone will always complain that the rich and well educated have better/faster access. This is true, and it is an unavoidable aspect of a market economy. The slope of availability is self-propelling and self-perpetuating. This same force created the consumer-level availability of the technology in the first place. Social critiques aside, it works well at doing what it does.

At this point, with pervasive access, the problem revolves back to literacy. It's part of a loop which we've been inside since at least the Gutenberg press. But the destitute literate will find a way to read (libraries today appropriately offer public internet access as well as books), just as the well-to-do will have means beyond their desires. This is in fact part of what keeps the system dynamic. My point, if I have one, is that digital literacy should take precedence over mere access. Literacy will find (or create) access for itself, but access without literacy sits dormant.

Of course, gaining literacy requires at least minimal access. Additionally, the type of literacy I have in mind requires a certain investment of time, and exposure to literate culture. It is not limited to a technical/functional literacy, but inculcates an appreciation for the current and future potential of the medium, specifically beyond one's personal uses. Overambitious perhaps, but I'll negotiate when I have a budget to meet. Teaching this sort of broad literacy is bound to be no easy task. My unlikely suggestion for top consultants on the issue: English (and other language) teachers.

Sunday, June 19, 2005

Ethnodesignography

The more literature I read on the topic, the more frustrating the gap between ethnography and design becomes. More precisely, the more frustrating the fact that the imagined bridges, while often eye-catching from a distance, seem incapable of actually bearing any load--even that of their own weight.

This gap, this "design canyon", actually has two difficulties to surmount: The (only somewhat metaphorical) physical separation, and the (even less metaphorical) fact that the two populations actually speak different languages.

Ethnographers, for their part, are almost to a one hesitant and ambivalent about actually making a bridge. Most often what they do is construct a lovely scenic vantage point on their edge of the canyon and call it a bridge. These are easily enough detected (except by their creators, apparently) by the realization that they don't actually allow either side to step closer to the other. At best, they allow one side to imagine that they are speaking the language of the other. This can allow a certain self-congratulatory air of achievement without the threatening reality of ingress by the designers.

This physical threat seems equally strong on both sides. Neither wants to risk allowing the other into their business, making troublesome suggestions and asking constant questions (though the latter would certainly be of benefit to both in the long run). What each side seems to want is a bridge that acts as a (value-)neutral territory into which communications may pass, be magically translated, and emerge in the terms of the other without causing any disturbance.

But the larger problem seems to be language. The social-practice separation can frankly be solved easily enough, but only if the two sides can communicate directly. This has plenty of historical (literal) parallels. My indictment--of the ethnographers, mostly, although what I know of the design side is hardly more impressive--is that its practitioners are pretending that a communication infrastructure is sufficient, and paying little attention to their inability to craft content. This is made worse when they make obscure and arcane gestures which allow them to believe they are speaking the language of the other, when in truth they are merely undertaking an elaborate ritual which reinforces their own practices.

On the other hand, I have no surefire solution. The best (and only) productive route which I see out of this impasse is to create ambassadors: brave souls willing to immerse themselves in the foreign habits, understandings, and detailed knowledge of the other so as to become human bridges. Perhaps then these human bridges can design a means of structured communication. What will not work is continuing to believe that each side can start its own bridge, with no understanding of the other's plans, and meet successfully halfway.

Sunday, June 05, 2005

Zoom

It would be easy enough to pick a certain lens through which to view the mundane, so as to give a "perspective". I'm an eager enough consumer of others' examples of just that. But how to not use a lens, and still have something to offer--is that naive? Or just uneconomical...

Later: I think it is naive after all. To continue the metaphor, everything that creates an image requires a lens, the human eye included*. The perspective of the lens is necessarily limited and arbitrarily biased (to a certain field of view, depth, focal range, etc) but at least the image is clear. A more appropriate goal might be to vary lenses regularly. Having a quality large zoom lens would be a good start.

*yes, I know that pinhole cameras don't need a lens, but I think I've burdened the metaphor enough already.

Thursday, June 02, 2005

Illiterate hammers

Every now and then--predictable in hindsight yet seemingly always a surprise--a technology that we take for granted runs up against its limitations and utterly fails to fulfill its purpose. Case in point: the telephone, among the most quotidian and useful of electronic devices. The telephone allows us to communicate over any terrestrial distance by whisking a voice first this way, then that. We use it every day and fully expect it to project our intent like a hammer. Sometimes nails are bent through imperfect application of that force (or uncooperative wood), but in the end an understanding can usually be worked out and something is held to something else which it wasn't previously.

But what if a nail absolutely didn't speak "hammer"? What if you picked up the phone, dropped it deftly on the nail above soft wood, and it remained utterly unfazed? What would you do if all the nails available for the task at hand were fully incompatible with your hammer? You quickly realize that your hammer isn't up to the task of nailing, and what's more, even if the nail tries to explain itself your hammer remains unenlightened and useless, heavy in your hand.

But that's if you're trying to hammer a nail over the phone. In person, body language and other tricks can be used to coerce just about any nail you can find. Audio has its limitations, and this is one of them. In person, we have quite a number of communications protocols available if the first one fails. With voice, we're stuck with just one port.

Which makes me wonder just how small a bare minimum universal communications protocol for humans might be. Not a full-fledged Esperanto; really minimal. 100 nouns, plus proper names as appropriate? 20 verbs? 50 modifiers (colors, qualifiers, adjectives)? With that I could say, "Where is X?", "Do you own a car?", "That is my book," and so on. No real grammar would be necessary.
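Just to get a feel for the scale involved, here is a toy sketch of such a vocabulary: a few word lists and a handful of fixed frames, no real grammar. The particular words, counts, and frame templates are entirely my own invention for illustration.

```python
# A toy sketch of a bare-minimum vocabulary: short word lists plus a few
# fixed utterance frames. Everything here is made up purely to show scale.
VOCAB = {
    "nouns": ["water", "food", "car", "book", "phone", "house"],   # ...up to ~100
    "verbs": ["be", "have", "want", "go", "give"],                 # ...up to ~20
    "modifiers": ["my", "your", "big", "red", "here", "where"],    # ...up to ~50
}

FRAMES = [
    "where be {noun} ?",
    "you have {noun} ?",
    "that be my {noun} .",
]

def utterances(noun):
    """Fill each fixed frame with the given noun; no grammar beyond the frame."""
    return [f.format(noun=noun) for f in FRAMES]

print({category: len(words) for category, words in VOCAB.items()})
for line in utterances("book"):
    print(line)
```

Even a list this small covers a surprising amount of survival-level conversation, which is the point: the whole thing could fit on a laminated card.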

I don't know of any such thing that exists. I wonder if it even could exist, being so small and unattached to a culture or population. If people taught it to their children, the kids might start to replace the words from their own language, at least at times, and a lot of people wouldn't like that. But if people didn't use it at least very occasionally, it would be forgotten and useless.