
A brief note on “commoning”

I got taken to task the other day regarding my preference for the jargony-seeming construction “commoning” over the more usual “commons.” (The specific wording: “You say you hate bullshit, but ‘commoning’ seems like just so much bafflegab to me.”)

This brilliant 2010 interview with key thinker/doers Massimo de Angelis and Stavros Stavrides ought to go some distance toward explaining that preference; it’s lost none of its luster with the intervening years, despite everything that’s happened in the world over that period.

In the effort to define a space for living that is neither market nor state, De Angelis and Stavrides make it clear that the act of seizing and occupying it is the easy part. All the glamor and all the grandeur attend that first nervy moment when wirecutters meet chainlink. But precisely who gets saddled with the obligation of continuously remaking that space? Who’s left with the physical work of maintenance, the emotional labor of negotiation? It’s a process, not a reified thing, and that in turn seems to demand the gerund form, with its implication that this is something unfolding in time: commoning.

Yeah? No? Works for me.

Maxima/list

By infrastructure, one refers to every aspect of the technology of rational administration that routinizes life, action, and property within larger (ultimately global) organizations. Today, infrastructure can be argued to own a little part of everything. Infrastructure, at the very least, is the systematic expression of capital, of deregulated currency, of interest rates, credit instruments, trade treaties, market forces, and the institutions that enforce them; it is water, fuel, and electrical reservoirs, routes and rates of supply; it is demographic mutations and migrations, satellite networks and lotteries, logistics and supply coefficients, traffic computers, airports and distribution hubs, cadastral techniques, juridical routines, telephone systems, business district self-regulation mechanisms, evacuation and disaster mobilization protocols, prisons, subways and freeways and their articulated connections, libraries and weather-monitoring apparatuses, trash removal and recycling networks, sports stadiums and the managerial and delivery facilities for the data they generate, parking garages, gas pipelines and meters, hotels, public toilets, postal and park utilities and management, school systems and ATM machines; celebrity, advertising, and identity engineering; rail nodes and networks, television programming, interstate systems, entry ports and the public goods and agencies associated with them [Immigration and Naturalization Service, National Security Agency, Internal Revenue Service, Food and Drug Administration, Bureau of Alcohol, Tobacco and Firearms], sewers and alarms, multi-tiered military-entertainment apparatus, decision engineering pools, wetlands and water basins, civil structure maintenance schedules, epidemiological algorithms, cable delivery systems, police enforcement matrixes, licensing bylaws, greenmarkets, medical-pharmaceutical complexes, internet scaffolds, handgun regulations, granaries and water towers, military deployment procedures, street and highway illumination schemas; in a phrase, infrastructure concerns regimens of technical calculation of any and all kinds.

– Sanford Kwinter and Daniela Fabricius, from “Urbanism: An Archivist’s Art?”, in that old standby, Koolhaas et al.’s Mutations.

My back pages: Spimed

Originally published 17 October 2004 on my old v-2.org site. Very, very interesting for me to see how my feelings have evolved, and where they remain consistent; there are probably as many instances of the former as of the latter. Plus, all those “Sterlings” now feel so stilted and formal and unnatural. (Hi, Bruce!) At any rate: enjoy.

If spam simply isn’t annoying enough to suit your needs, or you’re the kind of person who’s disappointed by the disarming ease you encounter when upgrading your laptop’s operating system to a new version, then boy does Bruce Sterling ever have a vision of the future for you.

Refining the message of his much-linked speech from this year’s SIGGRAPH conference in a new piece for Wired, Sterling draws us a picture of a coming time when intelligent, deeply internetworked and self-authenticating objects dominate the physical world: an “expensive, fussy, fragile, hopelessly complex” world, where entirely new forms of “theft, fraud [and] vandalism” await us.

I preface my comments the way I do because Sterling isn’t warning us about this world. He’s enthusing about it.

To some degree, in the SIGGRAPH speech, Sterling’s thrown us a definitional curveball. Having previously defined a “blobject” as an artifact of digital creation “with a curvilinear, flowing design, such as the Apple iMac computer and the Volkswagen Beetle,” he now asks us to step back a level of abstraction, and understand the word instead to mean an object that contains its own history digitally. Possibly realizing that this bait-and-switch presents abundant opportunities for confusion, he rescues himself at the last moment by substituting for “blobject” a new coinage, “spime”: “an object tracked precisely in space and time.”

And then he proceeds to imagine a world in which this self-documenting, self-tracking, self-extending stuff he calls spime dominates utterly, or is allowed to become utterly dominant. (Whatever one thinks of this particular coinage and its descriptive utility, there clearly was the need for a word here. As Sterling quite correctly points out, this is a class of objects without precedent in human history.)
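
If it helps to make this concrete, here’s a minimal sketch of what one such self-description might look like. The fields and the little Spime class are entirely my own illustrative invention, nothing Sterling specifies, but they capture the gist: a unique machine-readable identity, a pointer back to the fabrication source, and an ever-growing, located, timestamped history.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SpimeEvent:
    """One entry in an object's self-recorded history."""
    timestamp: datetime
    location: tuple       # (latitude, longitude)
    description: str
    recorded_by: str      # who, or what, appended this entry

@dataclass
class Spime:
    """A notional 'object tracked precisely in space and time'."""
    object_id: str        # unique, machine-readable identity
    design_source: str    # pointer to the fabrication data it was made from
    history: list = field(default_factory=list)

    def log(self, lat, lon, description, recorded_by):
        """Append a timestamped, located event to the object's own record."""
        self.history.append(
            SpimeEvent(datetime.now(timezone.utc), (lat, lon),
                       description, recorded_by))

# A notional t-shirt that knows where it's been and what's happened to it
# (the identifier and URL below are made up for illustration):
shirt = Spime("urn:epc:id:sgtin:0614141.812345.6789",
              "https://example.com/designs/tee.cad")
shirt.log(40.7128, -74.0060, "first consumer purchase", "point-of-sale terminal")
```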

put this product into service

I have a lot to say about the notion of such chimeric object/product/service hybrids, both because I think Sterling’s onto something important and real, and because the direction he takes it in worries me.

He’s got unusually fine and sensitive antennae; as a novelist, fabulist, extrapolator, raconteur and ranter, he’s terrific. But as a designer and an organizer of design, oh…let’s just say Sterling’s taste leaves something to be desired. So when he starts talking about “an imperial paradigm…a weltanschauung and a grand schemata [sic]” for designed objects, my ears perk up.

And it’s when he suggests that we have little choice but to prepare ourselves for a world of

– spime spam (vacuum cleaners that bellow ads for dust bags);
– spime-owner identity theft, fraud, malware, vandalism, and pranks;
– organized spime crime;
– software faults that make even a mop unusable;
– spime hazards (kitchens that fry the unwary, cars that drive off bridges);
– unpredictable emergent forms of networked spime behavior;
– objects that once were inert and are now expensive, fussy, fragile, hopelessly complex, and subversive of established values…

that I begin to get truly uncomfortable.

It’s not that Sterling’s identified the hazards improperly. Just the opposite: these are precisely (some of) the unpleasant eventualities we need to plan for in any setting of pervasive or ubiquitous “intelligence” (and which I discuss in a forthcoming article entitled “All watched over by machines of loving grace”). [Note: This article was essentially the genesis of Everyware.]

The problem is that he appears to be suggesting that “cop[ing] with” these headaches is about all that we can do, so obvious is the superiority of spime, and so inevitable its hegemony. Locked into technological determinism, he does little to challenge this here, beyond suggesting that, oh yeah, now that you mention it, this “imperial paradigm” might not necessarily be maximally convenient for its human subjects. This “ideal technology for concentration camps, authoritarian regimes, and prisons” is, yes, “a hassle. An enormous hassle.” But relax: “[I]t’s a fruitful hassle.”

With his unusually acute vision, Sterling can see something like this looming on the horizon and still be so cavalier as to suggest that, if we can only “cope with” these “hassles,” “spimes will be a massive improvement over the present closed, blind regime.” (Haven’t we heard all this better-living-through-chemistry noise before?) Such a stance strikes me as a not inconsiderable abdication of the role of anyone gifted with foresight. (It also strikes me as presuming a parallel abdication among designers, but we’ll get to that in a bit.)

It’s frustrating because I share, almost without exception, Sterling’s larger goals. He simply wants to save humanity from itself, from a situation in which we seem hellbent on drowning ourselves and whatever posterity we may achieve in tidal surges of our own noxious effluvia, and he’s looking for any help he can get from the technical side of the house. I get this, the essential good will undergirding the vision of spime.

But while I share a lot of Sterling’s faith in the ferment of human creativity, I’m not nearly as comfortable as he is with assuming that the results will always be “fruitful.”

the user and the used

I derive my suspicions not a little bit from what I know of the history of open-source software, in which applications that should by rights dominate their respective niches for their robustness or power or utility fail time and again to find the wider audience they deserve. I lay a lot of this at the feet of their user interfaces, which, designed by geeks for geeks as they are, almost invariably fail any other kind of user. The distributed nature of open-source creation seems to militate against the consistency required for a smooth, consumer-grade user experience.

Of course, one might point out that this inconsistency is inevitably implied in the core logic of open-source development, or anything like it: that notions of highly crafted user interfaces and content architectures are just so many farty, self-indulgent Rick Wakeman solos, bound to be cut down before the whirling DIY thresher of the new mutant thing.

Unless I’m badly mistaken, and as far as I’m able to gather from two decades of reading him, this stance captures something of Sterling’s position — that he doesn’t have much room for designers, mewling pitifully from the sidelines in all the impotence of their top-down, command-and-control obsolescence. Technology is destiny. The street will find its own uses; do what thou wilt shall be the whole of the law; great shall be the rejoicing.

It’s a weird thing to find myself on this side of history, given my other interests, and I’m not sure but that it may be a strategic mistake to even accept this framing of things, but here I go:

I do not believe that we want to live in a world where the best we can hope for is “wrangling” a surge of fast, cheap, out-of-control, autocatalytic blobjects. I simply do not believe that what we give up is worth less than what we are promised, even if what we are promised is delivered in anything close to full.

Control isn’t all DRM, you know. Control also means design with compassion, which is something whose complexities I believe we are just beginning to get a handle on. Control also means permitting (some) introduction of randomness in the service of a defined end. And for sure it means getting out ahead of foreseeable problems and taking measures to prevent their emergence.

To surrender this measure of control — to insist that all bottom-up, all the time is any kind of a path to a better world, and that all we can or should do is get out of the way — is fatuous, even negligent. (Indeed, “allowing otherwise avoidable dangers to manifest” defines negligence in the Anglo-American jurisprudential tradition.) Just in the last ten minutes, as I’m writing this, a correspondent tells me that an SMS-based survey asking users to name the 100 Greatest South Africans had to be abandoned by its originators when the notorious fascist Eugene Terreblanche popped out at the top.

Importantly, I don’t believe that Bruce Sterling believes any such thing, either. I don’t think for a moment that he would propose that we accept, or would himself accept, a situation in which people gave up all control over the things we build.

I just know, all too well, what happens to nuanced distinctions in the wild.

i contradict myself/i contain multitudes

Let me also take this opportunity to problematize even the notion that an object can usefully contain its own history. It’s a fetching, even an intoxicating idea, and you can easily see all the ways in which such a thing might be desirable. But whose history are we talking about, exactly?

Nurri’s work on the New York Public Library’s African-American Migration Experience project provides us with a nice capsule illustration of some of the problems involved when an item is recursively accompanied by descriptive information as it travels down through time. One of her responsibilities at the Digital Library is verifying that archival images have accurate metadata, fields describing the contents of an image.

Imagine that she’s come across a picture from 1920s Strivers’ Row, with a scrawled annotation on the back of it: “Some prominent local Negroes.” (This is not at all an atypical example.) An accurate provision of metadata, of course, requires transcribing the contemporaneous description word for word. But obviously, “prominent Negroes” is not going to fly as an object descriptor in 2004 — and nor should it, less from any feeling of political correctness (though there is that) than for the simple reason that few in 2004 are likely to search a database using the keyword “negroes,” unless it’s in a context like “Negro League baseball.”

And here the infinite regress beckons. Say you append both contemporary and historical tags to the image: “Images – Harlem – African-Americans” and “Caption – 1927 – ‘Some prominent local Negroes’”. You may have covered the obvious bases, but that’s nothing like a full history. To ensure the full understanding of someone arriving at the object from some context external in space, time, or both, you would also have to include information about the evolution of the English language and the society in which it’s used, just to explain why the 1927 label wasn’t considered appropriate a mere seventy-seven years later. You see where this is going? (Sterling himself points out that “[o]nce we tag many things, we will find that there is no good place to stop tagging.”)

Sure, memory is cheap, and will be cheaper. It’s not storing such a bottomless effusion of autodescription that I’m concerned about. It’s how useful this metadata will be, any of it, when its reliability will be hard to gauge – when different parts of an object’s record, introduced at different junctures in “space and time,” may well have differing degrees of reliability, and little way to distinguish between them!
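
To picture why this worries me, here’s a toy sketch of a record carrying assertions appended at different junctures in space and time. (It’s my own invention, not any real NYPL schema.) Nothing in the structure itself tells a downstream user which assertion to trust; reliability lives outside the record, in institutions and context.

```python
from dataclasses import dataclass

@dataclass
class Assertion:
    """One descriptive claim attached to an archival object."""
    text: str
    asserted_by: str  # who appended it
    asserted_in: int  # the year it was appended
    verified: bool    # has anyone vouched for it? often unknowable

# The same image, described at two junctures seventy-odd years apart:
record = [
    Assertion("Some prominent local Negroes", "unknown annotator", 1927, False),
    Assertion("Images - Harlem - African-Americans",
              "NYPL Digital Library", 2004, True),
]

# A naive keyword search treats every assertion identically; nothing in
# the record itself ranks one era's claim above another's.
hits = [a for a in record if "harlem" in a.text.lower()]
```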

We know from the Web and from various p2p applications that, in the wild, metadata is close to useless because it can be gamed so easily; as a result, no credible search engine relies on it, nor has any done so for years. Is that really the new Metallica single, or is it five minutes of Lars Ulrich telling you to go fuck yourself? Is that really a captive about to be decapitated by Islamists, or is it a commercial for a crappy movie you never would have clicked on had it represented itself honestly? (Who has the authority to append metadata? Who has the responsibility, or even the technical wherewithal, to verify it?) I’m surprised that someone as savvy as Bruce doesn’t seem to grasp the implications of this for spime.

unspiming

I believe, with Bruce Sterling, that some watershed is fast approaching, past which ordinary objects will be endowed with such information-sensing (-processing, -storage, -synthesis and -retransmission) power that both the way we understand them and the very language with which we refer to them will need to change.

Where I part ways with him, however, is in my belief that we don’t have to meekly bend over and try to “cope with” the negative consequences of any such development. As Lawrence Lessig rightly reminds us, in the destiny of any designed system, some possibilities are locked in, and others forestalled, at the level of architecture. And fortunately for all of us, when asked to submit to regimes of antihuman banality, some designers have historically had other ideas.

I can do little more than hope that this will always be the case: that those people endowed with the ability to see what’s coming over the horizon not merely describe what reaches their senses, but actively intervene to forestall the worst contingencies arising. Such an undertaking requires care and insight and discretion beyond that which we ordinarily display — myself as much as anyone else — but I firmly believe that we can choose our futures rather than have them imposed on us. In this season of decision, it is clear that in more ways than one, such a moment is now upon us.

If you want a closer look at the “spime metadata” I ginned up to serve as an illustration of this piece, it’s downloadable as a PDF here. It’s intended to represent the self-description (at time of first consumer purchase) of a notional Nike-brand t-shirt.

People are creative; industries, not so much. And cities?

So it looks like I’ll be in Amsterdam next month to speak at WCIT 2010: the seventeenth biennial World Congress on Information Technology, an event whose theme is “Challenges of Change.” (Lot of challenges this year, I guess, and that’s even before your civilizational transportation grid is brought to its knees by the merest grumblings of an Icelandic firegod.)

I am of course delighted to be at WCIT, but I have to say I’m a little perplexed by the relevance of anything I have to say to the track I’ve been assigned, “Creative Industries.” People I have a great deal of respect for have found institutional homes in departments so named, so there must be some there there, but for the life of me I can’t figure out why a rubric so fuzzy and problematic has risen to prominence so quickly.

Actually, I find the recent emphasis on “creative” X, Y and Z more than a little troubling. Part of this is simply a lifelong aversion to flavor-of-the-month thinking and empty jargon, but it’s also that it all seems to be down to the influence of Richard Florida — and in my mind, Florida’s seeming advocacy of things I care about deeply winds up trivializing and ultimately undercutting them.

Methodologically, of course, Florida’s original work leaves a great deal to be desired, so much so that the serious social scientists I know preemptively cringe when they can sense his name about to be uttered. The problems start right off the bat, with Florida’s definition of “creative”; in his hands, the term becomes so elastic as to be effectively meaningless, unless you truly believe that surgeons, hairdressers and cabinetmakers are all responding to the same primary imperatives in their choice of occupation.

But then it’s not clear that even if they did, they would think of themselves as a self-conscious class — i.e. a group with overriding shared or collective interests — at all. The sprawling cohort Florida anoints as creative for the purposes of making his case has so little in common otherwise that it’s hard to imagine them ever constituting a coherent constituency, voting bloc, market or audience.

I also wish somebody would tell me just which fields of human endeavor constitute these supposed “creative industries.” The laundry list of criteria that have been advanced strikes me as more self-congratulatory than diagnostically useful, and just about Borgesian into the bargain.

The error is compounded when some well-meaning effort is made to attract both class and industries to what are now being dubbed “creative cities.” Believe me, I have absolutely no problem if you want to attract creative people to your city, nor would I complain in the slightest if you rigged the machinery of municipal policy so as to render your part of the world that much more welcoming to gay men and bicyclists. We could all use a leisurely ride every once in a while, and so far as I know no city has ever done anything but make money and have a good time during an International Bear Rendezvous. That is all well and good.

But don’t for a moment make the mistake of believing that by so doing, you’ll automatically become Silicon Valley 2.0, let alone catapult your two-bit burg into the stratum of Sassen-class world cities. Convincing the startups, the venture money, and the young innovators that your part of the world would make a congenial home, in the hopes of cultivating a robust and sustainable tax base, is a perfectly reasonable thing to want to do. But the honest truth is that not every place is or ever will be equally set up to succeed in these things, and anybody who suggests otherwise is selling you a bill of goods.

The cynic (or the realist critic of neoliberalism) points out that investment is attracted by a “stable” local political environment and a docilized labor market contained by business-friendly wage and collective-bargaining laws. The Floridian, ever so slightly more evolved, will argue that sidewalk cafés, plentiful bike parking, and a neighborhood that breaks out in fluttering rainbow bunting come Pride each year are more likely to attract the clean, green twenty-first century investment you’re presumably really looking for. Better to snare Jamba Juice and the Apple Store and the kind of people who shop in them, goes the argument, than Pig Iron Smelting Joint Venture No. 4.

That’s all fine, as far as it goes. But I believe there’s a single factor that makes one or another region more attractive to the kinds of people and investment that apparently now signify above all others — and I’m sorry, Metz, it’s not having a starchitect-designed museum. It’s a factor I think of as organic sense of place.

Amsterdam, Barcelona, San Francisco, New York and London all have persistent local ways of doing and being, and that’s what makes them compelling places to work and settle, despite the inevitable hassles attendant upon doing so. These lifeways obviously evolved over historical time, and the harsh truth we can conclude from this is that there’s no turnkey way to join their ranks, no book you can read or seminar you can attend that can tell you how to be one of them. This has got to be a bitter pill to swallow, I know, if you’re Masdar or Sejong City.

I understand that times are tough, competition between cities is relentless and those of you responsible for making urban-scale decisions are desperately interested to hear from someone, anyone at all, who seems confident about having the answers. I’m simply begging you not to swallow Richard Florida’s ideas whole (or mine, or anybody else’s at all).

If you care about queer lives and two-wheeled transit, by all means take measures to support them. But do so on their own terms, in, of and for themselves, and not because you’re following some pop sociologist’s half-assed recipe for urban renaissance in the hope of luring development. Who knows, maybe a sincere effort at the former will wind up fructifying your town in all kinds of unexpected ways; it’s not as if it’s ever a particularly bad idea to underwrite civilization and amenity.

But if all you care about in the end is the flow of investment, talent and human capital through your town, you can probably save yourself the half-hearted effort at draping yourself with the Creative Industries mantle. There are plenty of other ways to attract capital, and though they’re neither as glamorous nor as generative of the instant cred that goes hand-in-hand with having purchased this year’s model, they work and work reliably.

I’ve never heard anyone accuse Zürich, for example, of having a blistering DJ scene, cutting-edge galleries or forward-leaning popup shops. Yet they seem to be doing OK when it comes to the cheddar, you know? Better a world of places that are what they are, and stand or fall on their own terms, than the big nowhere of ten thousand certified-Creative towns and cities with me-too museums, starchitected event spaces and half-hearted film festivals.

Ultramapping

As far as I can tell, the phrase “ultra mapping” originates with this Wired UK article on our friends at Stamen and their recent activities in London. It sure sounds like a Stamenism, and it beautifully describes what they do. But as I turn the phrase over in my mouth (and in so doing, decide it really ought to be a single word), I’ve come to think it actually refers to something even bigger than that, a truly epochal change that we and our maps are living through together.

I’m sure cartographers and people in the geography community have been all over this, possibly even in so many words, but I believe we’re right now experiencing the most significant single evolution in mapping since someone first scratched plans on papyrus. One relatively recent and very simple intervention, made possible by the lamination together of three or four different kinds of technology, has completely changed what a map is, what it means, what we can do with it.

It’s this: that for the very first time in human history, our maps tell us where we are on them.

The fact that such depictions can now also render layers of dynamic, real-time situational information seems almost incidental to me compared to this. This one development subtly but decisively removes the locative artifacts we use from the order of abstraction. By finding ourselves situated on the plane of a given map, we’re being presented with the implication that this document is less a diagram and more a direct representation of reality — and, what’s more, one with a certain degree of fidelity, one that can be verified empirically by the simple act of walking around. How is that not epochal?
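
And the technical membrane here is astonishingly thin. Here’s a sketch of the one projection step that turns a GPS fix into a position on the map under your thumb, assuming the standard Web Mercator slippy-map conventions; the coordinates below are a made-up example.

```python
import math

def latlon_to_pixel(lat, lon, zoom):
    """Project a WGS84 coordinate into Web Mercator pixel space.

    Assumes the usual slippy-map conventions: 256-pixel tiles, so the
    world is 256 * 2**zoom pixels wide at a given zoom level.
    """
    world = 256 * (2 ** zoom)
    x = (lon + 180.0) / 360.0 * world
    y = (1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * world
    return int(x), int(y)

# A made-up GPS fix in lower Manhattan, placed on a zoom-16 map:
px, py = latlon_to_pixel(40.7128, -74.0060, zoom=16)
print(px, py)  # the pulsing YOU ARE HERE dot gets drawn at exactly this point
```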

I’d argue that this begins to color our experience of all maps, even those that remain purely imaginary. We begin to look for the pulsing crosshairs or the shiny, cartoony pushpin that say YOU ARE HERE. The ability to locate oneself becomes bound up with the meaning of any representation of space whatsoever.

Now bring all the Stameny goodness implied by dynamic visualization back into the picture: all those routinely gorgeous renderings of subway ridership or crime or air quality imply something very different depending on whether or not you can find yourself within their ambit. At its rawest, the suggestion is this: either these issues affect me, or they do not. And this is true even if what is being mapped is a purely historical event. The implication is there, however faint.

I’ve been a map fan all my life. I must have spent literally hundreds of hours poring over various representations of place real and imagined, from the AAA TripTiks and Guides Michelin that used to litter the family car, to the Middle-Earth and Ringworld charts that so awed me when I was nine (“contour interval violated on Fist-of-God”), to the land-navigation block of the Army’s Primary Leadership Development Course (repeat after me: “a line drawing, to scale, of a portion of the Earth’s surface as seen from above”).

Nothing in all that, though, prepared me for the frisson of holding an iPhone in my hand for the first time, launching Google Maps, pressing a single button…and being located, told where I was to within a couple of meters. It’s a real epistemic break, isn’t it? Those who come after us will have a hard time imagining that there was ever such a thing as a map that couldn’t do that.

“Responsibilization” and user experience

It’s a terrible word, but maybe a terrible thing deserves one: “responsibilization” refers to an institution disavowing responsibility for some function it used to provide, and displacing that responsibility onto its constituents, customers, or users. Pat O’Malley, in the SAGE Dictionary of Policing, provides as crisp a definition as I’ve found, and it’s worth quoting here in full:

…a term developed in the governmentality literature to refer to the process whereby subjects are rendered individually responsible for a task which previously would have been the duty of another – usually a state agency – or would not have been recognized as a responsibility at all. The process is strongly associated with neoliberal political discourses, where it takes on the implication that the subject being responsibilized [!] has avoided this duty or the responsibility has been taken away from them in the welfare-state era and managed by an expert or government agency.

Of course, it’s not just state agencies. It’s every half-stepping, outsourcing, rightsizing, refocusing-on-our-core-competency business you’ve encountered in these austere days, shedding any process or activity which cannot be reimagined as a profit center. You’ll get the taste of it any time you turn to a Web community to replace the documentation or customer service manufacturers used to provide as a matter of course. More generally, we see the slow spread of attitudes like this reflected in technological artifacts like the femtocells carriers want to sell you to patch the holes in their own network coverage and semiotic artifacts like the signage here, not-so-subtly normalizing the idea that checking in for a flight is something that should be accomplished without recourse to expensive, troublesome human staff.

In both of these cases, a rhetorical sleight-of-hand is deployed to reframe the burden you must now shoulder as an opportunity – to convince you, to trot out once again a phrase that is rapidly outstaying its welcome, that what you are experiencing is a feature and not a bug. And this is the often-unacknowledged downside in the otherwise felicitous turn toward more open-ended product-service ecosystems: the price of that openness is generally increased vigilance and care on the user’s part, or “wrangling.” But there’s a stark difference, as I read it anyway, between knowingly taking on that order of obligation in the name of self-empowerment and improved choice, and having to take it on because the thing you’ve just shelled out a few hundred dollars for is an inert brick if you don’t.

I’m not sure there’s any long-term fix for this tendency in a world bracketed by the needs of institutions driven primarily by analyst calls, quarterly earnings estimates and shareholder fanservice on one flank, and deeply seamful technologies on the other. The pressures all operate in one direction: you’re the one left having to pick up a sandwich before your five-hour flight, figure out what on earth a “self-assigned IP address” means, and help moribund companies “innovate” their way out of a paper bag, for free. So if you manage an organization, of whatever size or kind, that’s in the position of having to do this to your users or customers, you definitely have the zeitgeist defense going for you. But at least have the common decency not to piss on people’s heads and tell them it’s raining.

There’s more on such “boundary shifts” here, and I’ll be writing much more about their consequences for the user experience over the next few months. For now, it’s enough to identify the tendency…and maybe begin to think about a more euphonious name for it, as well.

A note to my French friends

Bonjour, my esteemed Francophone friends, et bienvenue à mon site Web! Please indulge me: I have one small comment I’d like to share with you this morning, and I hope it doesn’t cause you undue dismay.

My name is not, actually, “Adam Greenfiels,” “Adam GreenField,” or “Adam Greensfield.”

I have some sympathy, of course, for the subtle torques and distortions that inevitably enter the act of nomenclature when non-Roman names are transcribed into Roman scripts, and vice versa: “누리” is not precisely “Nurri,” and you just live with that. (You don’t have to be a hard Sapir-Whorfian to understand that those are two different people, and that someone moving between two cultures with any degree of regularity is forced to live in the space between.)

I’m not all that offended; the bizarre intercap rendering, especially, amuses me. But I do think it’s kind of an elementary – almost a universal – courtesy to refer to someone as they refer to themselves, and I wouldn’t have imagined that such otherwise-worldly interlocutors as yourselves would have this hard a time rendering a plain Anglo-Saxon name like mine in another language founded on the same scriptural assets.

I mean, I don’t refer to Serge Gainbourg, Nicolas SarKozy, Catherine De-Neuve, right? Michel Houellebecqs? Michel Foucaux? (I could do this all day.)

Greenfield. Greenfield. Greenfield. Learn it…know it…love it! Merci…et bonne journée.

Of books and unbooks

I’m not sure precisely what’s driving it – maybe it’s the bracing, clarifying, liberatory aspect of a severe economic downturn – but I sense an absolutely titanic percolation of creativity out there in the world just now.

Each day seems to bring word of another genuinely good new idea or way of framing things, something truly worth reckoning with. I’m frankly jealous that so much of this is going on at a moment when – ironically enough, and for the first time in a decade – I’m mired inside the kind of structure that doesn’t lend itself to such investigations, but the optimist in me hopes there may still be one or two ways to contribute to and participate in what is shaping up to be a great fructification.

The unbook is exemplary of the kind of ideas this moment in our lives seems to be turning up. It’s clearly come steam-engine time, in that a bunch of different people (with predictably diverse instincts and agendas) have been converging on this idea for a while now, but I give props to Jay Cross and Dave Gray for naming the idea and therefore giving it immediacy. Sometimes, as I of all people know, tagging something with a gimmicky name is just what needs to happen before the idea at its core can assume concrete form in people’s minds.

To my mind, anyway, the unbook is a container for long-form ideas appropriate to an internetworked age. By building on some admittedly dorky but highly useful tropes of software, mostly having to do with version control, open-endedness and an explicit role for the “user” community, the notion allows such works to usefully harness the dynamic and responsive nature of discourse on the Web, while preserving coherence, authorial voice and intent.

This is precisely what Nurri and I have always had in mind for The City Is Here For You To Use, which is shaping up as something of an unbook avant la lettre. It’s why we’ve always insisted on keeping you in the loop as to the book’s fitful progress, it’s why I take every opportunity to test its ideas here, it’s why I make explicit the fact that your response to those ideas is crucial to their evolution and expression. And it’s why, even though the process is inevitably going to result in a static, physical document as one of its manifestations – and hopefully a very nice one indeed – we’ve committed to offering a free and freely-downloadable Creative Commons-licensed PDF of every numbered version of The City, from zero onward. You buy the book if you want the object. The ideas are free.

The important part is in acknowledging two points which have usually been understood as contradictory, but which are actually nothing of the sort: firstly, that the expression of ideas in written form has something to learn from the practices that have evolved around the collaborative creation of dynamic, digital documents over the half-century-long history of software; and secondly, that certain ideas require elaboration in the reasonably strongly-bounded form we know as a “book,” and cannot meaningfully be shared otherwise. A third point, concomitant to the second, is that despite recent technical advances, screen-based media still cannot, and may not ever fully be able to, deliver the extratextual cues and phenomenological traces that support, inform and extend the meaning of written documents. (Cory, I love you, but I’ve heard you discount these very real pleasures more than once. Don’t you know you can have your cake and eat it too?)

Well. As Dave Gray points out, “An unbook’s community is a very real part of the unbook’s development team.” I wouldn’t necessarily have used the phrase “development team,” for the obvious reasons, but the point stands. Your voice is a part of this book we’re writing, and not the least significant. What do you think?

A BRIC to the head

There are certain terms in use in the English language that cannot help but reveal the person uttering them to be guilty of lazy thinking at the very least, and perhaps even an outright idiot.

If such words were merely flags for dullness, I’d almost look kindly upon their use, because this would perform a useful hygienic function – you’d simply discount, forever after, the opinions of a known user. Habitual offenders could be ignored entirely, with profit.

But it’s worse than that. These words are antimatter to clarity of insight, or more accurately, some malignant linguistic equivalent of ice-nine: to drop one of them into a sentence is not merely to cast doubt on the acuity of one’s own mental processes, it’s to poison the entire discussion that follows and therefore includes the term by reference.

Contemporary business culture has, of course, given rise to a great many such terms and expressions (“net-net,” “rightshoring” and “-sizing,” and most especially “out of the box”), and while I know I should probably be more generous and tolerant, let’s be honest: I tend to flip the bozo bit, more or less permanently, on anyone who lets one of these slip in my presence.

Yeah, I do tend to find “drink[ing] the Kool-Aid” inherently and offensively trivializing. And don’t get me started about “learnings.” But the one that really sticks in my craw of late is a term of art so insultingly reductionist, so collapsingly awful, that it seems only a matter of time until it’s enshrined in the official name of a strategy document or (better yet) department.

This term is “the BRIC countries,” “BRIC” being a handy, one-syllable acronym used to refer to the two point eight billion citizens of Brazil, Russia, India and the People’s Republic of China, and strongly implying that usefully robust commonalities of market conditions, sociometrics or (!) user behavior can be observed among and between them. (Usage: “How are we going to sell this in the BRICs?”)

Remember: the complaint is not just that using “BRIC” makes you sound like a tool, although that is certainly the case. And it’s not even that, unless you happen to be participating in a discussion of projective macroeconomics, the abbreviation actually and measurably makes you dumber for using it. It’s that the term kills thought dead in everything it touches.

In the contexts in which I tend to encounter it, “BRIC” amounts to a not particularly timid suggestion that the behaviors, horizons and ambitions of a street kid in Kolkata, an EVP for Sales in São Paulo, a nursing student in Chengdu and a taxi driver in Vladivostok are more or less interchangeable. While I know full well that you’d never even want to imply that, why be a party to discussions that do?

Look, I’m not gonna get all “Politics and the English Language” on you – ‘twould be a tad hypocritical – but let us at least agree that this nasty little word is quite literally stupefying. If you just happen to work for, say, the World Bank, and the moment demands of you some handy rubric under which to gather these four specific economies – in this case, and this case alone, you can be forgiven. Otherwise…you know what to do.

“Information architecture”

So it looks like I’m going to be keynoting the EuroIA conference in Amsterdam in September, and therefore (and not without some irony) re-engaging with the information architecture community I so vocally left behind in the fall of 2006.

Since I’m doing this literally the day after a talk at Picnic ’08, there had been some concern regarding overlap and/or overexposure on the part of the organizers. Which is not entirely unreasonable: while the audiences are separate and distinct, this is after all going to be the third time this year I will have spoken in that fairest of cities, and at some point it’s hard to argue that you’re offering people something truly worth their investment of time.

So I think it’s only fair of me to bring the IAs some entirely new material – something that builds on the Everyware and The City Is Here work I’ve been doing for the last four years, but that is also engineered from the ground up for that specific audience.

I thought I might start with a comment from last September’s LIFT event in Seoul that kind of caught me by surprise: Bruce’s characterization of the person orchestrating a fabject’s transition into the actual as an “information architect.” I laughed at the time – I daresay that upward of 90% of the people who think of themselves as information architects have never heard the word “fabject” – but I’d also be willing to grant that those two words actually constitute an apposite term for the task and mindset any such endeavor would involve.

I was reminded of this again this morning, as I was sipping Kona and reading Archinect’s wonderful roundup of “Design and the Elastic Mind” reviews, in which Fred Scharmen makes the astute and very timely observation that

to organize and present information on a sufficiently large and complex scale is to perform a task commensurate with the orchestration and coordination necessary to construct a built space.

And lo! he drops the unutterable words, arguing that they point at

…the existence of another, more all-encompassing way of working and making: a yet-un-named field that comprises the kind of systems-level thinking that architecture itself might be a subset of.

And I find myself kind of nodding my head, y’know? I’m in broad agreement with Fred that this is a very natural way to understand and communicate what’s bound up in this critical twenty-first century domain of practice, and everything involved in shepherding (spatial, experiential and social) artifacts between their virtual and actual manifestations.

The challenge, of course, is that this is information architecture as virtually none of the people practicing that endeavor today understand it.

Now the very last thing I want to do is reactivate any of the dull, passionate, and ultimately pointless nomenclatural debates that roiled IA circa 1999-2000. I do think it would be of interest, though, to present to the people currently practicing something they think of as information architecture the idea that a phase transition may be about to unfold across their field of practice, success in which will demand the exercise of skills and orientations that are currently external to their worldview. I dunno: what do you think?