The following is the draft of a section from my forthcoming book, The City Is Here For You To Use, concerning various ways in which networked devices are used to furnish the mobile pedestrian with a layer of location-specific information superimposed onto the forward view — “augmented reality,” in other words. (The context is an extended discussion of four modes in which information is returned from the global network to the world so it may be engaged, considered and acted upon, which is why the bit here starts in medias res.)
As you see it here, the section is not quite in its final form; it hasn’t yet been edited for meter, euphony or flow, and in particular, some of the arguments toward the end remain too telescoped to really stand up to much inspection. Nevertheless, given the speed at which wearable AR is evolving, I thought it would be better to get this out now as-is, to garner your comments and be strengthened by them. I hope you enjoy it.
One seemingly potent way of returning networked information to the world would be if we could layer it directly over that which we perceive. This is the premise of so-called augmented reality, or AR, which proposes to furnish users with some order of knowledge about the world and the objects in it, via an overlay of informational graphics superimposed on the visual field. In principle, this augmentation is agnostic as to the mediating artifact involved, which could be the screen of a phone or tablet, a vehicle’s windshield, or, as Google’s Glass suggests, a lightweight, face-mounted reticle.
AR has its conceptual roots in informational displays developed for military pilots in the early 1960s, at the point when the performance of enemy fighter aircraft began to overwhelm a human pilot’s ability to react. In the fraught regime of jet-age dogfighting, even a momentary dip of the eyes to a dashboard-mounted instrument cluster could mean disaster. The solution was to project information about altitude, airspeed and the status of weapons and other critical aircraft systems onto a transparent pane aligned with the field of vision, a “head-up display.”
This notion turned out to have applicability in fields beyond aerial combat, where the issue wasn’t so much reaction time as it was visual complexity. One early AR system was intended to help engineers make sense of the gutty tangle of hydraulic lines, wiring and control mechanisms in the fuselage of an airliner under construction; each component in the otherwise-hopeless confusion was overlaid with a visual tag identifying it by name, and colored according to the system it belonged to.
Other systems were designed to help people manage situations in which both time and the complexity of the environment were sources of pressure — for example, to aid first responders in dispelling the fog and chaos they’re confronted with upon arrival at the scene of an emergency. One prototype furnished firefighters with visors onto which structural diagrams of a burning building were projected, along with symbols indicating egress routes, the position of other emergency personnel, and the presence of electric wiring or other potentially dangerous infrastructural elements.
The necessity of integrating what were then relatively crude and heavy cameras, motion sensors and projectors into a comfortably wearable package limited the success of these early efforts — and this is to say nothing of the challenges posed by the difficulty of establishing a reliable network connection to a mobile unit. But the conceptual heavy lifting done to support these initial forays produced a readymade discourse, waiting for the day augmentation might be reinstantiated in smaller, lighter, more capable hardware.
That is a point we appear to have arrived at with the advent of the smartphone. As we’ve seen, the smartphone handset can be thought of as a lamination together of several different sensing and presentation technologies, subsets of which can be recombined with one another to produce distinctly different ways of engaging networked information. Bundle a camera, accelerometer/gyroscope, and display screen in a single networked handset, and what you have in your hands is indeed an artifact capable of sustaining rudimentary augmentation. Add GPS functionality and a three-dimensional model of the world — either maintained onboard the device, or resident in the cloud — and a viewer can be offered location-specific information, registered with and mapped onto the surrounding urban fabric.
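The essential trick here — registration of networked information with the visible world — reduces to mapping the compass bearing of a point of interest onto a position on the screen. The following is a deliberately naive sketch of that mapping, not any actual product’s implementation; the field-of-view and screen-width figures are placeholder assumptions, and a real system would also have to account for pitch, roll and lens distortion.

```python
def bearing_to_screen_x(poi_bearing_deg, device_heading_deg,
                        fov_deg=60.0, screen_width_px=1080):
    """Map a point of interest's compass bearing to a horizontal
    screen position, given the device's heading and the camera's
    field of view. Returns None when the POI is out of view.
    The fov and screen-width defaults are illustrative values."""
    # Signed angular offset of the POI from the view axis, in (-180, 180]
    offset = (poi_bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0
    half_fov = fov_deg / 2.0
    if abs(offset) > half_fov:
        return None  # outside the camera's field of view: don't draw it
    # Linear mapping: -half_fov -> left edge, +half_fov -> right edge
    return (offset + half_fov) / fov_deg * screen_width_px
```

Anything whose bearing falls outside the camera’s field of view is simply not drawn.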
In essence, phone-based AR treats the handset like the transparent pane of a cockpit head-up display: you hold it before you, its camera captures the forward-facing view, and this is rendered on the screen transparently but for whatever overlay of information is applied. Turn and the on-screen view turns with you, tracked (after a momentary stutter) by the grid of overlaid graphics. And those graphics can provide anything the network can: identification, annotation, direction or commentary.
It’s not hard to see why developers and enthusiasts might jump at this potential, even given the sharp limits imposed by the phone as platform. We move through the world and we act in it, but the knowledge we base our movements and actions on is always starkly less than what it might be. And we pay the price for this daily, in increments of waste, frustration, exhaustion and missed opportunity. By contrast, the notion that everything the network knows might be brought to bear on someone or -thing standing before us, directly there, directly present, available to anyone with the wherewithal to sign a two-year smartphone contract and download an app — this is a deeply seductive idea. It offers the same aura of omnipotence, that same frisson of godlike power evoked by our new ability to gather, sift and make meaning of the traces of urban activity, here positioned as a direct extension of our own senses.
Why not take advantage of this capability? After all, the richness and complexity of city life confronts us with any number of occasions on which the human sensorium could do with a little help.
Let a few hundred neurons in the middle fusiform gyrus of the brain’s right hemisphere be damaged, or fail to develop properly in the first place, and the result is a disorder called prosopagnosia, more commonly known as faceblindness. As the name suggests, the condition deprives its victims of the ability to recognize faces and associate them with individuals; at the limit, someone suffering with a severe case may be entirely unable to remember what his or her loved ones look like. So central is the ability to recognize others to human socialization, though, that even far milder cases cause significant problems.
Sadly, this is something I can attest to from firsthand experience. Like an estimated 2.5% of the population, I suffer from the condition, and even in the relatively attenuated form I’m saddled with, my broad inability to recognize people has caused more than a few experiences of excruciating awkwardness. At least once or twice a month I run into people on the street who clearly have some degree of familiarity with me, and find myself unable to come up with even a vague idea of who they might be; I’ll introduce myself to a woman at a party, only to have her remind me (rather waspishly, but who can blame her) that we’d worked together on a months-long project. Deprived of contextual cues — the time and location at which I usually meet someone, a distinctive hairstyle or mode of dress — I generally find myself no more able to recognize former colleagues or students than I can complete strangers. And as uncomfortable as this can be for me, I can only imagine how humiliating it is for the person on the other end of the encounter.
I long ago lost track of the number of times in my life at which I would have been grateful for some subtle intercessionary agent: something that might drop a glowing outline over the face of someone approaching me and remind me of his or her name, the occasion on which we met last, maybe even what we talked about on that occasion. It would spare both of us from mortification, and shield my counterpart from the inadvertent but real insult implied by my failure to recognize them. So the ambition of using AR in this role is lovely — precisely the kind of sensitive technical deployment I believe in, where technology is used to lower the barriers to socialization, and reduce or eliminate the awkwardnesses that might otherwise prevent us from better knowing one another.
But it’s hard to imagine any such thing being accomplished by the act of holding a phone up in front of my face, between us, forcing you to wait first for me to do so and then for the entire chain of technical events that must follow in order to fulfill the aim at the heart of the scenario. The device must acquire an image of your face with the camera, establish the parameters of that face from the image, and upload those parameters to the cloud via the fastest available connection, so they may be compared with a database of facial measurements belonging to known individuals; if a match is found, the corresponding profile must be located, and the appropriate information from that profile piped back down the connection so it may be displayed as an overlay on the screen image.
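To make the shape of that chain concrete, here is a toy sketch of just the matching step. Everything in it is invented for illustration — the tiny three-dimensional “embeddings,” the names, the distance threshold; production systems compare high-dimensional vectors produced by trained neural networks, against databases many orders of magnitude larger.

```python
import math

# Hypothetical "cloud" database: face embeddings for known individuals.
# Three dimensions and these profiles are purely illustrative.
KNOWN_FACES = {
    (0.1, 0.8, 0.3): {"name": "A. Colleague", "last_met": "June workshop"},
    (0.7, 0.2, 0.5): {"name": "B. Student", "last_met": "spring seminar"},
}

def match_face(embedding, threshold=0.25):
    """Compare a captured face embedding against the database and
    return the closest profile, or None if nothing is close enough."""
    best, best_dist = None, float("inf")
    for known, profile in KNOWN_FACES.items():
        dist = math.dist(embedding, known)  # Euclidean distance
        if dist < best_dist:
            best, best_dist = profile, dist
    return best if best_dist <= threshold else None
```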
Too many articulated parts are involved in this interaction, too many dependencies — not least of which is the coöperation of a Facebook, a Google, or some other enterprise with a reasonably robust database of facial biometrics, and that is of course wildly problematic for other reasons. Better I should have confessed my confusion to you in the first place.
Perhaps a less technologically intensive scenario would be better suited to the phone as platform for augmentation? How about helping a user find their way around the transit system, amidst all the involutions of the urban labyrinth?
Here we can weigh the merits of the use case by considering an actual, shipping product, Acrossair’s Nearest Subway app for the iPhone, first released in 2010. Like its siblings for London and Paris, Nearest Tube and Nearest Metro, Nearest Subway uses open location data made available by the city’s transit authority to specify the positions of transit stops in three-dimensional space. On launch, the app loads a hovering scrim of simple black tiles featuring the name of each station, and icons of the lines that serve it; the tiles representing more distant stations are stacked atop those that are closer. Rotate, and the scrim of tiles rotates with you. Whichever way you face, you’ll see a tile representing the nearest subway station in the direction of view, so long as some outpost of the transit network lies along that bearing in the first place.
Nearest Subway is among the more aesthetically appealing phone-based AR applications, eschewing junk graphics for simple, text-based captions sensitively tuned to the conventions of each city’s transit system. If nothing else, it certainly does what it says on the tin. It is, however, almost completely worthless as a practical aid to urban navigation.
When aimed to align with the Manhattan street grid from the corner of 30th Street and First Avenue, Nearest Subway indicates that the 21st Street G stop in Long Island City is the closest subway station, at a distance of 1.4 miles in a north-northeasterly direction.
As it happens, there are a few problems with this. For starters, from this position the Vernon Boulevard-Jackson Avenue stop on the 7 line is 334 meters, or roughly four New York City blocks, closer than 21st Street, but it doesn’t appear as an option. This is either an exposure of some underlying lacuna in the transit authority’s database — unlikely, but as anyone familiar with the MTA understands implicitly, well within the bounds of possibility — or more probably a failure on Acrossair’s part to write code that retrieves these coordinates properly.
Just as problematically, the claimed bearing is roughly 55 degrees off. If, as will tend to be the case in Manhattan, you align yourself with the street grid, a phone aimed directly uptown will be oriented at 27 degrees east of due north, at which point Nearest Subway suggests that the 21st Street station is directly ahead of you. But it actually lies on an azimuth of 82 degrees; if you took the app at its word, you’d be walking uptown a long time before you hit anything even resembling a subway station. This is most likely to be a calibration error with the iPhone’s compass, but fairly or otherwise Nearest Subway shoulders the greater part of the blame here — as anyone familiar with computational systems has understood since the time of Babbage, if you put garbage in, you’ll get garbage out.
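Claims like these are easy enough to check, since the distance and azimuth between two points follow from their coordinates via the standard great-circle formulas. In the sketch below, the coordinates are my own rough approximations read off a map, not authoritative data:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (meters) and initial bearing (degrees
    clockwise from true north) between two lat/lon points."""
    R = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    # Haversine formula for distance
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing along the great circle
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360
    return dist, bearing

# Approximate coordinates (assumptions, read off a map):
corner = (40.7420, -73.9754)   # 30th Street and First Avenue
g_stop = (40.7440, -73.9490)   # 21st Street (G)
dist, bearing = distance_and_bearing(*corner, *g_stop)
```

With these inputs the distance comes out at roughly 2.2 kilometers, and the bearing in the eighties of degrees — consistent with the azimuth cited above.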
Furthermore, since by design the app only displays those stations roughly aligned with your field of vision, there’s no way for it to notify you that the nearest station may be directly behind your back. Unless you want to rotate a full 360 degrees, then, and make yourself look like a complete idiot in the process, the most practical way to use Nearest Subway is to aim the phone directly down, which makes a reasonably useful ring of directional arrows and distances pop up. (These, of course, could have been superimposed on a conventional map in the first place, without undertaking the effort of capturing the camera image and augmenting it with a hovering overlay of theoretically compass-calibrated information.)
However unfortunate these stumbles may be, they can all be resolved, addressed with tighter code, an improved user interface or a better bearing-determination algorithm. Acrossair could fix them all, though — enter every last issue in a bug tracker, and knock them down one by one — and that still wouldn’t address the primary idiocy of urban AR in this mode: from 30th Street and First Avenue, the 21st Street G stop is across the East River. You need to take a subway to get there in the first place. However aesthetically pleasing an interface may be, using it to find the closest station as the crow flies does you less than no good when you’re separated from it by a thousand meters of water.
Finally, Nearest Subway betrays a root-level misunderstanding of the relationship between a citydweller and a transportation network. In New York City, as in every other city with a complex underground transit system, you almost never find yourself in a situation where you need to find the station that’s nearest in absolute terms to begin with; it’s far more useful to find the nearest station on a line that gets you where you want to go. Even at the cost of cluttering what’s on the screen, then, the very first thing the would-be navigator of the subway system needs is a way to filter the options before them by line.
I raise these points not to lay all of the blame at Acrossair’s door, but to suggest that AR itself is badly unsuited to this role, at least when handled in this particular way. It takes less time to load and use a map than it does to retrieve the same information from an augmentive application, and the map provides a great deal more of the context so necessary to orienting yourself in the city. At this point in technological evolution, then, more conventional interface styles will tend to furnish a user with relevant information more efficiently, with less of the latency, error and cruft that inevitably seem to attend the attempt to superimpose it over the field of vision.
If phone-based augmentation performs poorly as social lubricant or aid to urban navigation, what about another role frequently proposed for AR, especially by advocates in the cultural heritage sector? This use case hinges on the argument that by superimposing images or other vestiges of the past of a place directly over its present, AR effectively endows its users with the ability to see through time.
This might not make much sense at all in Songdo, or Masdar, or any of the other new cities now being built from scratch on greenfield sites. But anyone who lives in a place old enough to have felt the passage of centuries knows that history can all too easily be forgotten by the stones of the city. Whatever perturbations from historical events may still be propagating through the various flows of people, matter, energy and information that make a place, they certainly aren’t evident to casual inspection. An augmented view returning the layered past to the present, in such a way as to color our understanding of the things all around us, might certainly prove to be more emotionally resonant than any conventional monument.
Byzantium, old Edo, Roman Londinium, even New Amsterdam: each of these historical sites is rife with traces we might wish to surface in the city occupying the same land at present. Locales overwhelmed by more recent waves of colonization, gentrification or redevelopment, too, offer us potent lenses through which to consider our moment in time. It would surely be instructive to retrieve some record of the jazz- and espresso-driven Soho of the 1950s and layer it over what stands there at present; the same goes for the South Bronx of 1975. But traversed as it was during the twentieth century by multiple, high-intensity crosscurrents of history, Berlin may present the ultimate terrain on which to contemplate recuperation of the past.
This is a place where pain, guilt and a sense of responsibility contend with the simple desire to get on with things; no city I’m familiar with is more obsessively dedicated to the search for a tenable balance between memory and forgetting. The very core of contemporary Berlin is given over to a series of puissant absences and artificially sustained presences, from the ruins of Gestapo headquarters, now maintained as a museum called Topography of Terror, to the remnants of Checkpoint Charlie. A long walk to the east out leafy Karl-Marx-Allee — Stalinallee, between 1949 and 1961 — takes you to the headquarters of the Stasi, the feared secret police of the former East Germany, also open to the public as a museum. But there’s nowhere in Berlin where the curious cost of remembering can be more keenly felt than in the field of 2,711 concrete slabs at the corner of Ebertstrasse and Hannah-Arendt-Strasse. This is the Memorial to the Murdered Jews of Europe, devised by architect Peter Eisenman, with early conceptual help from the sculptor Richard Serra.
Formally, the grim array is the best thing Eisenman has ever set his hand to, very nearly redemptive of a career dedicated to the elevation of fatuous theory over aesthetic coherence; perhaps it’s the Serra influence. But as a site of memory, the Memorial leaves a great deal to be desired. It’s what Michel Foucault called a heterotopia: something set apart from the ordinary operations of the city, physically and semantically, a place of such ponderous gravity that visitors don’t quite know what to make of it. On my most recent visit, the canyons between the slabs rang with the laughter of French schoolchildren on a field trip; the children giggled and flirted and shouted to one another as they leapt between the stones, and whatever the designer’s intent may have been, any mood of elegy or commemoration was impossible to establish, let alone maintain.
Roughly two miles to the northeast, on the sidewalk in front of a doner stand in Mitte, is a memorial of quite a different sort. Glance down, and you’ll see the following words, inscribed into three brass cubes set side by side by side between the cobblestones:
Ermordet in Auschwitz: that is, on specific dates in November of 1942 and March of the next year, the named people living at this address were taken across this very sidewalk and forcibly transported hundreds of miles east by the machinery of their own government, to a country they’d never known and a facility expressly designed to murder them. The looming façades around you were the last thing they ever saw as free people.
It’s in the dissonance between the everyday bustle of Mitte and these implacable facts that the true horror resides — and that’s precisely what makes the brass cubes a true memorial, indescribably more effective than Eisenman’s. The brass cubes, it turns out, are Stolpersteine, or “stumbling blocks,” a project of artist Gunter Demnig; these are but three of what are now over 32,000 that Demnig has arranged to have placed in some 700 cities. The Stolpersteine force us to read this stretch of unremarkable sidewalk in two ways simultaneously: both as a place where ordinary people go placidly about their ordinary business, just as they did in 1942, and as one site of a world-historical, continental-scale ravening.
The stories etched in these stones are the kind of facts about a place that would seem to yield to a strategy of augmentation. The objection could certainly be raised that I found them so resonant precisely because I didn’t see them every day, and that their impact would very likely fade with constant exposure; we might call this the evil of banality. But being compelled to see and interpret the mundane things I did in these streets through the revenant past altered my consciousness, in ways subtler and longer-lasting than anything Eisenman’s sepulchral array of slabs was able to achieve. AR would merely make the metaphor literal — in fact, it’s easy for me to imagine the disorienting, decentering, dis-placing impact of having to engage the world through a soft rain of names, overlaid onto the very places from which their owners were stolen.
But once again, it’s hard to imagine this happening via the intercession of a handset. Nor are the qualities that make smartphone-based AR so catastrophically clumsy, in virtually every scenario of use, particularly likely to change over time.
The first is the nature of functionality on the smartphone. As we’ve seen, the smartphone is a platform on which each discrete mode of operation is engaged via a dedicated, single-purpose app. Any attempt at augmenting the environment, therefore, must be actively and consciously invoked, to the exclusion of other useful functionality. The phone, when used to provide such an overlay, cannot also and at the same time be used to send a message, look up an address, buy a cup of coffee, or do any of the other things we now routinely expect of it.
The second reservation is physical. Providing the user with a display surface for graphic annotation of the forward view simply isn’t what the handset was designed to do. It must be held before the eyes like a pane of glass in order for the augmented overlay to work as intended. It hardly needs to be pointed out that this gesture is not one particularly well-suited to the realities of urban experience. It has the doubly unappealing quality of announcing the user’s distraction and vulnerability to onlookers, while simultaneously ensuring that the device is held in the weak grip of the extended arm — a grasp from which it may be plucked with relative ease.
Taken together, these two impositions strongly undercut the primary ostensible virtue of an augmented view, which is its immediacy. The sole genuine justification for AR is the idea that information is simply there, copresent with what you’re already looking at and able to be assimilated without thought or effort.
That sense of effortlessness is precisely what an emerging class of wearable mediators aims to provide for its users. The first artifact of this class to reach consumers is Google’s Glass, which mounts a high-definition, forward-facing camera, a head-up reticle and the microphone required by the natural-language speech recognition interface on a lightweight aluminum frame. While Glass poses any number of aesthetic, practical and social concerns — all of which remain to be convincingly addressed, by Google or anyone else — it does at least give us a way to compare hands-free, head-mounted AR with the handset-based approach.
Would any of the three augmentation scenarios we explored be improved by moving the informational overlay from the phone to a wearable display?
A system designed to mitigate my prosopagnosia by recognizing faces for me would assuredly be vastly better when accessed via head-mounted interface; in fact, that’s the only scenario of technical intervention in relatively close-range interpersonal encounters that’s credible to me. The delay and physical awkwardness occasioned by having to hold a phone between us goes away, and while there would still be a noticeable saccade or visual stutter as I glanced up to read your details off my display, this might well be preferable to not being remembered at all.
That is, if we can tolerate the very significant threats to privacy involved, which only start with Google’s ownership of or access to the necessary biometric database. There’s also the question of their access to the pattern of my requests, and above all the one fact inescapably inherent to the scenario: that people are being identified as being present in a certain time and place, without any necessity whatsoever of securing consent on their part. By any standard, this is a great deal of risk to take on, all to lubricate social interactions for 2.5% of the population.
Nearest Subway, as is, wouldn’t be improved by presentation in the line of sight. Given what we’ve observed about the way people really use subway networks, information about the nearest station in a given direction wouldn’t be of any greater utility when splashed on a head-up display than it is on the screen of a phone. Whatever the shortcomings of this particular app, though, they probably don’t imply anything in particular about the overall viability of wearable AR in the role of urban navigation, and in many ways the technology does seem rather well-suited to the wayfinding challenges faced by the pedestrian.
Of the three scenarios considered here, though, it’s AR’s potential to offer novel perspectives on the past of a place that would be most likely to benefit from the wearable approach. We would quite literally see the quotidian environment through the lens of a history superimposed onto it. So equipped, we could more easily plumb the psychogeographical currents moving through a given locale, better understand how the uses of a place had changed over time, or hadn’t. And because this layer of information could be selectively surfaced — invoked and banished via voice command, toggled on or off at will — presenting information in this way might well circumvent the potential for banality through overfamiliarization that haunts even otherwise exemplary efforts like Demnig’s Stolpersteine.
And this suggests something about further potentially productive uses for augmentive mediators like Glass. After all, there are many kinds of information that may be germane to our interpretation of a place, yet effectively invisible to us, and historical context is just one of them. If our choices are shaped by dark currents of traffic and pricing, crime and conviviality, it’s easy to understand the appeal of any technology proposing that these dimensions of knowledge be brought to bear on that which is seen, whether singly or in combination. The risk of bodily harm, whatever its source, might be rendered as a red wash over the field of vision; point-by-point directions as a bright and unmistakable guideline reaching into the landscape. In fact any pattern of use and activity, so long as its traces were harvested by some data-gathering system and made available to the network, might be made manifest to us in this way.
Some proposed uses of mediation are more ambitious still, pushing past mere annotation of the forward view to the provision of truly novel modes of perception — for example, the ability to “see” radiation at wavelengths beyond the limits of human vision, or even to delete features of the visual environment perceived as undesirable. What, then, keeps wearable augmentation from being the ultimate way for networked citizens to receive and act on information?
The approach of practical, consumer-grade augmented reality confronts us with an interlocking series of concerns, ranging from the immediately practical to the existential.
A first set of reservations centers on the technical difficulties involved in the articulation of an acceptably high-quality augmentive experience. We’ve so far bypassed discussion of these so we could consider different aspects of the case for AR, but ultimately they’re not of a type that allows anyone to simply wave them away.
At its very core, the AR value proposition subsists in the idea that interactions with information presented in this way are supposed to feel “effortless,” but any such effortlessness would require the continuous (and continuously smooth) interfunctioning of a wild scatter of heterogeneous elements. In order to make good on this promise, a mediation apparatus would need to fuse all of the following: a sensitively designed interface; the population of that interface with accurate, timely, meaningful and actionable information; and a robust, high-bandwidth connection to the networked assets furnishing that information from any point in the city, indoors or out. Even putting questions of interface design to the side, the technical infrastructure capable of delivering the other necessary elements reliably enough that the attempt at augmentation doesn’t constitute a practical and social hazard in its own right does not yet exist — not anywhere in North America, anyway, and not this year or next. The hard fact is that for a variety of reasons having to do with national spectrum policy, a lack of perceived business incentives for universal broadband connectivity, and other seemingly intractable circumstances, these issues are nowhere near being ironed out.
In the context of augmentation, as well, the truth value of representations made about the world acquires heightened significance. By superimposing information directly on its object, AR arrogates to itself a peculiar kind of claim to authority, a claim of a more aggressive sort than that implicit in other modes of representation, and therefore ought to be held to a higher standard of completeness and accuracy. As we saw with Nearest Subway, though, an overlay can only ever be as good as the data feeding it, and the auguries in this respect are not particularly reassuring. Right now, Google’s map of the commercial stretch nearest to my apartment building provides labels for only four of the seven storefront businesses on the block, one of which is inaccurately identified as a restaurant that closed many years ago. If even Google, with all the resources it has at its disposal, struggles to provide its users with a description of the streetscape that is both comprehensive and correct, how much more daunting will other actors find the same task?
Beyond this are the documented problems with visual misregistration and latency that are of over a decade’s standing, and have not been successfully addressed in that time — if anything, have only been exacerbated by the shift to consumer-grade hardware. At issue is the mediation device’s ability to track rapid motions of the head, and smoothly and accurately realign any graphic overlay mapped to the world; any delay in realignment of more than a few tens of milliseconds is conspicuous, and risks causing vertigo, nausea and problems with balance and coordination. The initial release of Glass, at least, wisely shies away from any attempt to superimpose such overlays, but the issue must be reckoned with at some point if useful augmentive navigational applications are ever to be developed.
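The arithmetic behind that threshold is unforgiving, because angular error scales linearly with both head speed and latency. A back-of-envelope sketch, in which the rotation rate and latency figures are merely plausible rather than measurements of any particular device:

```python
def misregistration_deg(head_rate_deg_per_s, latency_ms):
    """Angular drift of a world-locked overlay when the head turns
    at the given rate and the display pipeline lags by the given
    latency. Both input figures below are illustrative."""
    return head_rate_deg_per_s * latency_ms / 1000.0

# A casual glance of ~100 degrees per second against 50 ms of
# end-to-end latency leaves the overlay drawn 5 degrees away
# from its target.
error = misregistration_deg(100, 50)
```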
Another set of concerns centers on the question of how long such a mediator might comfortably be worn, and what happens after it is taken off. This is of especial concern given the prospect that one or another form of wearable AR might become as prominent in the negotiation of everyday life as the smartphone itself. There is, of course, not much in the way of meaningful prognostication that can be made ahead of any mass adoption, but it’s not unreasonable to build our expectations on the few things we do know empirically.
Early users of Google’s Glass report disorientation upon removing the headset after as little as fifteen minutes of use — a mild disorientation, to be sure, and easily shaken off; from all accounts the sort of uneasy feeling that attends staring overlong at an optical illusion. If this represents the outer limit of discomfort experienced by users, it’s hard for me to believe that it would have much impact on either the desirability of the product or people’s ability to function after using it. But further hints as to the consequences of long-term use can be gleaned from the testimony of pioneering researcher Steve Mann, who has worn a succession of ever-lighter and more-capable mediation rigs all but continuously since the mid-1980s. And his experience would seem to warrant a certain degree of caution: Mann, in his own words, early on “developed a dependence on the apparatus,” and has found it difficult to function normally on the few occasions he has been forcibly prevented from accessing his array of devices.
When deprived of his set-up for even a short period of time, Mann experiences “profound nausea, dizziness and disorientation”; he can neither see clearly nor concentrate, and has difficulty with basic cognitive and motor tasks. He speculates that over many years, his neural wiring has adapted to the continuous flow of sensory information through his equipment, and this is not an entirely ridiculous thing to think. At this point, the network of processes that constitutes Steve Mann’s brain — that in some real albeit reductive sense constitutes Steve Mann — lives partially outside his skull.
The objection could be made that this is always already the case, for all of us — that some nontrivial part of everything that makes us what we are lives outside of us, in the world, and that Mann’s situation is only different in that much of his outboard being subsists in a single, self-designed apparatus. But if anything, this makes the prospect of becoming physiologically habituated to something like Google Glass still more worrisome. It’s precisely because Mann developed and continues to manage his own mediation equipment that he can balance his dependency on it with the relative freedom of action enjoyed by someone who for the most part is able to determine the parameters under which that equipment operates.
If Steve Mann has become a radically hybridized consciousness, at least he has a legitimate claim to ownership and control over all of the places where that consciousness is instantiated. By contrast, all of the things a commercial product like Glass can do for the user rely on the ongoing provision of a service — and if there’s anything we know about services, it’s that they can be and are routinely discontinued at will, as the provider fails, changes hands, adopts a new business strategy or simply reprioritizes.
A final set of strictly practical concerns has to do with the collective experience of augmentation, or what implications our own choice to be mediated in this way might hold for the experience of others sharing the environment.
For all it may pretend to transparency, literally and metaphorically, any augmentive mediator by definition imposes itself between the wearer and the phenomenal world. This, of course, is by no means a quality unique to augmented reality. It’s something AR has in common with a great many ways we already buffer and mediate what we experience as we move through urban space, from listening to music to wearing sunglasses. All of these impose a certain distance between us and the full experiential manifold of the street, either by baffling the traces of it that reach our senses, or by offering us a space in which we can imagine and project an alternative narrative of our actions.
But there’s a special asymmetry that haunts our interactions with networked technology, and tends to undermine our psychic investment in the immediate physical landscape; if “cyberspace is where you are when you’re on the phone,” it’s certainly also the “place” you are when you text or tweet someone while walking down the sidewalk. I’ve generally referred to what happens when someone moves through the city while simultaneously engaged in some kind of remote interaction as a condition of “multiple adjacency,” but of course it’s really no such thing: so far, at least, only one mode of spatial experience can be privileged at a given time. And if it’s impossible to participate fully in both of these realms at once, one of them must lose out.
Watch what happens when a pedestrian first becomes conscious of receiving a call or a text message, the immediate damming they cause in the sidewalk flow as they pause to respond to it. Whether the call is made hands-free or otherwise doesn’t really seem to matter; the cognitive and emotional investment in what transpires in the interface is what counts, and this investment is generally so much greater than it is in the surroundings that street life clearly suffers as a result. The risk inherent in this divided attention appears to be showing up in the relevant statistics in the form of an otherwise hard-to-account-for upturn in accidents involving pedestrian fatalities, where such numbers had been falling for years. This is a tendency that is only likely to be exacerbated by augmentive mediation, particularly where content of high inherent emotional involvement is concerned.
At this moment in time, it would be hard to exaggerate the appeal the prospect of wearable augmentation holds for its vocal cohort of enthusiasts within the technology community. This fervor can be difficult to comprehend, so long as AR is simply understood to refer to a class of technologies aimed at overlaying the visual field with information about the objects and circumstances in it.
What the discourse around AR shares with other contemporary trans- and posthuman narratives is a frustration with the limits of the flesh, and a frank interest in transcending them through technical means. To advocates, the true appeal of projects like Google’s Glass is that they are first steps toward the fulfillment of a deeper promise: that of becoming-cyborg. Some suggest that ordinary people will come to mediate the challenges of everyday life via complex informational dashboards, much like those first devised by players of World of Warcraft and similar massively multiplayer online role-playing games. The more fervent dream of a day when their capabilities are enhanced far beyond the merely human by a seamless union of organic consciousness with networked sensing, processing, analytic and storage assets.
Beyond the profound technical and practical challenges involved in achieving any such goal, though, someone not committed to one or another posthuman program may find that they have philosophical reservations about this notion, and about what it implies for urban life. These may be harder to quantify than strictly practical objections, but any advocate of augmentation technologies who is also interested in upholding the notion of a city as a shared space will have to come to some reckoning with them.
Anyone who cares about what we might call the full bandwidth of human communication — very much including the transmission and reception of those cues vital to understanding, but present only beneath the threshold of conscious perception — ought to be concerned about the risk posed to interpersonal exchanges by augmentive mediation. Wearable devices clearly have the potential to exacerbate existing problems of self-absorption and mutual inconsideration. Although in principle there’s no reason such devices couldn’t be designed to support or even enrich the sense of intersubjectivity, what we’ve seen about the technologically mediated pedestrian’s unavailability to the street doesn’t leave us much room for optimism on this count. The implication is that if the physical environment doesn’t fully register for a person so equipped, neither will other people.
Nor is the body by any means the only domain that the would-be posthuman subject may wish to transcend via augmentation. Subject as it is to the corrosive effects of entropy and time, forcing those occupying it to contend with the inconvenient demands of others, the built environment is another. Especially given current levels of investment in physical infrastructure in the United States, there is a very real risk that those who are able to do so will prefer retreat behind a wall of mediation to the difficult work of being fully present in public. At its zenith, this tendency implies both a dereliction of public space and an almost total abandonment of any notion of a shared public realm. This is the scenario imagined by science-fiction author Vernor Vinge in Rainbows End (2006), in which people interact with the world’s common furniture through branded thematic overlays of their choice; it’s a world that can be glimpsed in the matter-of-factly dystopian videos of Keiichi Matsuda, in which a succession of squalid environments come to life only when activated by colorful augmentive animations.
The most distressing consequences of such a dereliction would be felt by those left behind in any rush toward augmentation. What happens when the information necessary to comprehend and operate an environment is not immanent to that environment, but has become decoupled from it? When signs, directions, notifications, alerts and all the other instructions necessary to the fullest use of the city appear only in an augmentive overlay, and as is inevitably the case, that overlay is available to some but not others? What happens to the unaugmented human under such circumstances? The perils would surely extend beyond a mere inability to act on information; the non-adopter of a particularly hegemonic technology almost always stands in jeopardy of being seen as a willful transgressor of norms, even an ethical offender. Anyone forgoing augmentation, for whatever reason, may find that they are perceived as somehow less than a full member of the community, with everything that implies for the right to be and act in public.
The deepest critique of all those lodged against augmented reality is sociologist Anne Galloway’s, and it is harder to answer. Galloway suggests that the discourse of computational augmentation, whether consciously or otherwise, “position[s] everyday places and social interactions as somewhat lacking or in need of improvement.” Again there’s this Greshamization, this sense of a zero-sum relationship between AR and a public realm already in considerable peril just about everywhere. Maybe the emergence of these systems will spur us to some thought as to what it is we’re trying so hard to augment. Philip K. Dick once defined reality as “that which refuses to go away when you stop believing in it,” and it’s this bedrock quality of universal accessibility — to anyone at all, at any time of his or her choosing — that constitutes its primary virtue. If nothing else, reality is the one platform we all share, a ground we can start from in undertaking the arduous and never-comfortable process of determining what else we might agree upon. To replace this shared space with the million splintered and mutually inconsistent realities of individual augmentation is to give up on the whole pretense that we in any way occupy the same world, and therefore strikes me as being deeply inimical to the urban project as I understand it. A city in which the physical environment has ceased to function as a common reference frame is, at the very least, terribly inhospitable soil for democracy, solidarity or simple fellow-feeling to take root in.
It may well be that this concern is overblown. There is always the possibility that augmented reality never will amount to very much, or that after a brief period of consideration it’s actively rejected by the mainstream audience. Within days of the first significant nonspecialist publicity around Google Glass, Seattle dive bar The 5 Point became the first commercial establishment known to have enacted a ban on the device, and if we can fairly judge from the rather pungent selection of terms used to describe Glass wearers in the early media commentary, it won’t be the last. By the time you read these words, these weak signals may well have solidified into some kind of rough consensus, at least in North America, that wearing anything like Glass in public space constitutes a serious faux pas. Perhaps this and similar AR systems will come to rest in a cultural-aesthetic purgatory like that currently occupied by Bluetooth headsets, and if that does turn out to be the case, any premature worry about the technology’s implications for the practice of urban democracy will seem very silly indeed.
But something tells me that none of the objections we’ve discussed here will prove broadly dissuasive, least of all my own personal feelings on the subject. For all the hesitations anybody may have, and for all the vulnerabilities even casual observers can readily diagnose in the chain of technical articulations that produces an augmentive overlay, it is hard to argue against a technology that glimmers with the promise of transcendence. Over anything beyond the immediate near term, some form of wearable augmentive device does seem bound to take a prominent role in returning networked information to the purview of a mobile user at will, and thereby in mediating the urban experience. The question then becomes what kind(s) of urbanity will be produced by people endowed with this particular set of capabilities, individually and collectively, and how we might help the unmediated contend with cities unlike any they have known, enacted for the convenience of the ambiguously transhuman, under circumstances whose depths have yet to be plumbed.
Notes on this section
 Grüter T, Grüter M, Carbon CC (2008). “Neural and genetic foundations of face recognition and prosopagnosia”. J Neuropsychol 2 (1): 79–97.
 For early work toward this end, see http://www.cc.gatech.edu/~thad/p/journal/augmented-reality-through-wearable-computing.pdf. The overlay of a blinking outline or contour used as an identification cue, incidentally, has long been a staple of science-fictional information displays, showing up in pop culture as far back as the late 1960s. The earliest appearance I can locate is 2001: A Space Odyssey (1968), in which the navigational displays of both the Orion III spaceplane and Discovery itself relied heavily on the trope — this, presumably, because they were produced by the same contractor, IBM. See also Pete Shelley’s music video for “Homosapien” (1981) and the traverse corridors projected through the sky of Blade Runner’s Los Angeles (1982).
 As always, I caution the reader that the specifics of products and services, and their availability, will certainly change over time. All comments here regarding Nearest Subway pertain to v1.4.
 See discussion of “Superplonk” in [a later section]. On Steve Mann, see http://m.spectrum.ieee.org/podcast/geek-life/profiles/steve-manns-better-version-of-reality
 At the very least, the user interface should offer some kind of indication of the confidence of a proffered identification, and perhaps of how that determination was arrived at. See [a later section] on seamfulness.
 Azuma, “Registration Errors in Augmented Reality,” 1997.
 See Governors Highway Safety Association, “Spotlight on Highway Safety: Pedestrian Fatalities by State,” 2010. http://www.ghsa.org/html/publications/pdf/spotlights/spotlight_ped.pdf; similarly, a recent University of Utah study found that the act of immersion in a conversation, rather than any physical aspect of use, is the primary distraction while driving and talking on the phone. That hands-free headset may not keep you out of a crash after all. http://www.informationweek.com/news/showArticle.jhtml?articleID=205207840
 A story on the New York City-based gossip site Gawker expressed this point of view directly, if rather pungently: “If You Wear Google’s New Glasses, You Are An Asshole.” http://gawker.com/5990395/if-you-wear-googles-new-glasses-you-are-an-asshole
 The differentiation involved might be very fine-grained indeed. Users may interact with informational objects that exist only for them and for that single moment.
 The first widespread publicity for Glass coincided with Google’s release of a video on Wednesday, 20th February, 2013; The 5 Point announced its ban on 5th March. The expressed concerns center more on the device’s data-collection capability than anything else: according to owner Dave Meinert, his customers “don’t want to be secretly filmed or videotaped and immediately put on the Internet,” and this is an entirely reasonable expectation, not merely in the liminal space of a dive bar but anywhere in the city. See http://news.cnet.com/8301-1023_3-57573387-93/seattle-dive-bar-becomes-first-to-ban-google-glass/
Consider this a shooting script for one of those concept videos so beloved of the big technology vendors. If you find my reading tendentious, I can assure you that every element of the scenario I present here has been drawn directly from the website copy or other promotional literature of IBM, Cisco, Siemens, Living PlanIT, Gale International (i.e. Songdo) or Masdar.
Daybreak on a Wednesday in April, sometime in the first third of the twenty-first century. The lights come up slowly in Maria Villanueva’s condo, forty-seven stories up the side of the soaring Phase III development. It’s a few weeks past the first anniversary of Maria’s arrival in Noblessity, and in some ways she’s still getting used to the way she lives in this brand-new city of ten square kilometers, so recently and famously reclaimed from the ocean itself.
Her building, for example: a daringly helical twist of stacked apartment units, devised by a name-brand Danish architectural practice. Back home she could never have afforded to live in anything remotely like this — and that’s if there even were buildings like this at home in the first place, which she doubts. This morning the active shutters, sensing a rare onshore breeze, have deployed microfilaments to trap the moisture in the air, softly hazing them at the edges so they seem to blur into the murky sunlight. Even the soft light that makes it through is too bright for Maria, though, and she clutches vaguely at bedside for her phone so she can launch the app that controls the windowshades.
Maria’s husband Mark left for work hours ago — he’s a lawyer negotiating EMEA rebroadcast rights for an American basketball league, and his teleconferences tend to happen on Los Angeles time. So on this Wednesday morning, she finds she has the apartment to herself. She drags herself from bed, shouts for the kitchen to fix her a latte and heads to the en-suite bathroom.
Headlines stack up on the mirror, and Maria scans them as she blowdries her hair: “Climate talks enter a third fruitless…guest-worker privileges revoked following…Royal scandal erupts as Mail drone captures…” None of this seems like it will immediately bear on her work, and just as quickly as the headlines arrive she dismisses them, with the mere swipe of a fingertip.
The walk-in closet has an app to choose outfits appropriate to the weather, but the weather’s always the same here — punishingly hot and dry outside, and invariably a comfortable 72° everywhere that isn’t. Maria has never once launched the app. She gives herself a last quick once-over in the full-length, pats down a few vagrant strands of hair, and then it’s off to work.
Maria belongs to an elite team of analysts tasked with riding herd on autonomous trading algorithms for a City of London-based financial concern. After a solid six months in which she made a newcomer’s show of diligence, she’d rather gotten used to the luxury of working from home most days of the week, but in the interests of team cohesion senior management has just issued a policy forbidding this. And so once again she finds herself faced with the necessity of a twice-daily commute between the ranked condos of the residential zone and the supertowers of the Central Business District.
This is not, as it happens, a huge imposition. The mobility fee is included in her compensation package, and actually, the drive isn’t so bad; depending on traffic and the precise route chosen by the car, it takes anywhere from ten to fifteen minutes. Maria knows from experience that if she calls the car service as she walks out the front door of her unit, her car will be pulling up under the porte-cochère just as she gets there. And so it is this morning; the elevator, as always alert to the patterns of movement within the building, is empty of anyone else. It briefly occurs to her that she’s forgotten, again, to shut off the lights in the closet, but it doesn’t matter: but for the low-level autonomic systems, everything in the condo fades to black thirty seconds after the unit detects a lack of human presence.
The briefest blast of desiccating heat, and then she’s safely into the car. Today’s car is a little funky, a little foul — not so much that somebody had actually smoked a cigar in it, but maybe that it had recently been used by somebody who smoked a lot of cigars. And used rather too much cologne. Maria punches the air conditioning to its highest setting and tries to breathe through her mouth.
There’s apparently been a fender-bender on the Grand Axial, and the car is rerouted around it without so much as a peep. And so Maria finds that her way to work this morning takes her via the Coastal Ringway, past the three enormous pipelines that supply Noblessity with fresh water from the mainland. The water is provided by the host nation at no expense, for the duration of the developer’s 99-year lease on the land — just one of the many ways the host nation expresses its gratitude for the massive infusion of talent and capital sitting just offshore. Of course it’s been a while since Maria crossed the causeway; truth be told, she only does so on her way to or from the airport. But she keeps meaning to drag Mark over for a visit, get a taste for how the people here really live, and one of these weekends she’s sure they will.
Just past the ten-story screen that fronts the Museum of Contemporary Art, as the car passes beneath the overway heralding entry into the CBD, the windshield starts to pulse red. The soft bonging of an awareness alert issues from the dashboard, and there is the slightest sideways lurch as the car moves to put some distance between itself and a disturbance rapidly approaching in the curbside lane. On the sidewalk ahead, a man in the yellow coveralls of a guest worker is visibly struggling with two Public Safety men. The windshield overlay has identified him as a PDP, or Potentially Disruptive Person. Ever since the bombings in Rio, of course, everyone’s been a little bit on edge, and feeling the slightest bit guilty that she’d ignored the headline earlier in the morning, Maria taps a finger on the windshield for more information. The public scanners have registered an unidentifiable, roughly weapon-sized object under the man’s clothing; and this, correlated with his location and immigration status, is surely enough to trip the threat-detection algorithm’s probability threshold.
But they’re barely abreast of the disturbance before a Public Safety van has whisked up to the curb, and amid a sudden bloom of khaki PS uniforms the guest worker is hustled in and away. Maria’s car torques up with the silent immediacy of electric drive; with a quick and almost subliminal sigh, she releases the tension she barely knew she was carrying, and the unpleasantness rapidly dwindles in the rearview mirror.
Before long the car glides to a halt in front of the Bourse, and the door pops open to let Maria exit before heading off to its next booking. Maria places great stock in mindfulness, so today as every day she pauses for a moment to breathe and contemplate the massive visualization that pulses across the entire width and breadth of the façade. It’s hard to make out in direct sunlight, but if you shield your eyes and look carefully you can see how the whole surface of the building shimmers with graphics representing real-time trading activity.
At this hour, it’s still last night in Chicago and New York, and half a day yet before the London and Frankfurt exchanges open. So the activity dancing across the façade is all the Nikkei, the Hang Seng and the CSI 300…and the blips of an algorithm she and her colleagues have dubbed Dirty Frank, leaving its bizarre and so-far unfathomed spoor of stochastic trades across the minutes.
The view on Maria’s desk, of course, is more sophisticated by far than the poppy visualization splashed across the façade. Her job is to reverse-engineer algorithms like Dirty Frank, determine the logic driving each one, and help her firm develop tactics to counter them. The few hours of morning work pass quickly, as work always will for someone who is paid well to do what she’s good at, and loves what she is paid to do, and lunchtime rolls around before she knows it.
Everyone knows how awkward it can be to socialize with folks working in different fields, so Maria’s agenda app has booked her for lunch in a restaurant rated highly over the past six weeks by people whose activity on Noblessity’s resident-only social network suggests a high degree of compatibility. But when she gets out onto the Plaza, she finds it unusually, even alarmingly crowded, and asks one of her building’s uniformed concierges if he knows what’s going on.
It seems a private shopper for one of the luxury boutiques on the Skydeck level, deputized to serve one of the members of the boy band that played the Performing Arts Center last night, has uploaded a brief video of her charge shimmying into a tight new pullover — and of course the time- and location-stamped video has gone viral locally. In the fullness of time the shopper will be fired, doubtlessly, but the damage is already done. A lengthening line of cars waits to disgorge passengers at each of the bays around the plaza’s perimeter, and the walks and overways are perceptibly starting to fill with giddy young women.
The mast-mounted cameras high above Bourse Plaza have, of course, identified the potentially troublesome concentration of pedestrians, just as roadbed sensors register the increased traffic load and flag it for immediate attention. It’s just after shift change in Noblessity’s Intelligent Operations Center deep beneath the streets, and the fresh crew is quick to respond to the emergent condition – except for special occasions like the annual Jazz Festival, management likes to keep densities in the CBD low, and the oversight team’s contractual performance incentives depend on keeping the sidewalks at Level of Service C or better.
Ordinarily, of course, this isn’t an issue; between the oppressive heat and the long, triumphal blocks, nobody tends to walk very much or very far in Noblessity. Thanks to the private shopper’s indiscretion, though, today is shaping up to be different. Traffic on the sidewalks has started to thicken, contraflow movement is beginning to be difficult, one or two leading indicators of social distress have started to show up on the Big Board. It’s little more than threshold activity at this point, but if nobody issues a command override, active countermeasures will be deployed…and mindful of those incentives, nobody does. Up go the bollards around the plaza, down go the gates on the overways, and one after another, all of the signals turn green on all of the routes leaving the area.
Maria finds herself rerouted for the second time this day, this time on foot. Her phone runs a few quick calculations against her standing parameters and winds up recommending a trattoria-style Italian place she’s never thought to try before, just the other side of the World Expo Center — happy serendipity. Of everything on the menu, there are only a few options lit up on the tabletop as falling within her current diet guidelines, but the Caesar salad she chooses is delicious. The ten-minute walk back to work mostly takes her through temperature-controlled spaces, while between them the gorgeous, ethnic-inspired patterns of the active brise-soleils have unfolded to shield the walkways from the worst of the noonday sun. Even the more visible crowd-dispersion measures have faded back.
By the time Maria calls it a day, the East Asian markets are long closed, but NASDAQ’s just getting started. With a brief series of taps, she formally passes operational responsibility to her New York-based colleagues, and puts her desk to sleep. Her drive home is daydreamy, if a bit subdued — the billboards along the route all seem to be down, and she watches them drift by in a succession of vivid frames the color of clear sky.
After she’s changed into workout clothes, Maria orders a car to the Recreation Zone. Despite the heat, she loves to run along the manicured paths set between the lakes and fountains, to measure her progress against the countersunk lighting pavers. At the entrance to Oceanside Park, a two-man construction crew with a miniature backhoe is digging up the sensors they emplaced just last year — management has sourced a newer model, cheaper and more capable. True to every word of the promises the headhunter made, Noblessity is continuously in the process of being upgraded.
As Maria huffs around the outer loop, her sunglasses keep a running tally of the calories she’s burning, representing them as a blue line climbing diagonally across her peripheral vision. As the blue of her efforts finally begins to track the green of the optimal curve set by her company’s employee wellness plan, she feels a tight glow of satisfaction well up inside her. A brief flourish of trumpets in her earbuds and an animated burst of fireworks means she’s unlocked a mileage target achievement. This will mean new options at dinner for sure.
The original plan for the evening was to meet Mark for dinner at the new robata grill on the garden level of Entertainment Sector South. But just as she turns into her final lap, Maria’s sunglasses light up with a call. It’s Mark, exhausted from what has been a long and arduous day of strategy sessions; feeling pretty burnt out herself, Maria suggests they meet up at home and order in instead. She knows from experience that she won’t even need to call for a car — the service’s adaptive load-balancing algorithm knows the fall of darkness will always mean a line of people who need rides home from the park — and the condo is mere minutes away.
Of the many amenities provided by her building, among Maria’s very favorites is the one she now avails herself of: ordered meals, like care packages from home and other deliveries, are deposited in the autolocker, so she doesn’t even need to deal with the delivery boy. Mark orders with a few taps on the kitchen screen, and they catch each other up on their respective days during the twenty or so minutes that go by before the autolocker chimes to announce the arrival of their dinner. They grab a few napkins and their containers of food and settle back on the couch to buy a movie from the wallscreen.
Before it’s even a third over, though, Maria realizes with a start that she’s started to nod off. She plants a kiss on the top of her husband’s head and pads off to bed. Just as she slides between the sheets, the briefest prayer of acknowledgment escapes her lips, a prayer of gratitude for another day of health, profit and productivity, another day in balance, another day in Noblessity.
It’s been a big week hereabouts. In particular, I have two pieces of news about Do projects to share with you:
– As you probably know, Nurri and I have been running Systems/Layers “walkshops” under the Do aegis for the last year or so, in cities from 65°N to 41°S.
As we define it, anyway, a walkshop is an activity in which anywhere up to about twenty people take a slow and considered walk through the city together, carefully examining the urban fabric and the things embedded in it, and then sharing their insights with one another and the wider world. (Obviously, you could do a walkshop on any particular urbanist topic that interested you, but we’ve focused ours on looking at the ways in which networked information-processing systems increasingly condition the metropolitan experience.)
We’ve gotten a huge kick out of doing the Systems/Layers walks, but the simple truth is that there are so many competing claims on our time and energy that we can’t dedicate ourselves to running them full-time. We’ve also been encouraged by the result of our first experiment in open-sourcing the idea, the Systems/Layers event Mayo Nissen held in Copenhagen last June.
So when Giles Lane at Proboscis asked us if we’d consider contributing to his Transformations series, we knew right away just what we’d do. We decided to put together a quick guide to DIY walkshops, something to cover the basics of organizing, promoting and executing an event.
Last Monday, with Giles’s patient support, this idea came to fruition in the launch of Do 1101, Systems/Layers: How to run a walkshop on networked urbanism as a Diffusion eBook pamphlet. As with most things we offer, the pamphlet is released to you under the terms of a Creative Commons Attribution-Noncommercial-Sharealike license, so we expect that some of you will want to get in there and repurpose the content in other contexts.
We’ll most likely be rereleasing the Systems/Layers material our ownselves in the near future, in an extended dance mix that includes more detail, more structure, and more of Nurri’s pictures. In the meantime, we hope you enjoy the pamphlet, and let us know about the uses to which you put it.
Safety Maps is a free online tool that helps you plan for emergency situations. You can use it to choose a safe meeting place, print a customized map that specifies where it is, and share this map with your loved ones. (As it says on the site, the best way to understand how it works is simply to get started making a Safety Map of your own.)
It’s been a delicate thing to build. Given the entire framing of the site, it and the maps it produces absolutely have to work in their stated role: coordinating the action of couples, households and other small groups under the most trying of circumstances, when communications and other infrastructures may simply be unavailable. They have to do so without implying that a particular location is in fact safer than any other under a given set of conditions, or would remain accessible in the event of disaster. And they have to do so legibly, clearly, and straightforwardly.
These are utilitarian preparedness/resilience considerations, and they’re eminently appropriate. But in the end, the site springs from a different set of concerns: in Nurri’s original conception, the primary purpose of these artifacts is to prompt us to think about the people we love and the utter and harrowing contingency of the circumstances that allow us to be together. We obviously hope people find Safety Maps useful in challenging moments, but we imagine that we’d hear about this either way — whereas it’s difficult, if not impossible, for us to ever know if the site works in the way she intended it to.
Even though it was an accident of timing, Nurri also had some questions about releasing Safety Maps so soon on the heels of the Sendai earthquake/tsunami; she didn’t want us to appear to be opportunists reaping ghoulish benefit from the suffering of others. I think releasing it was the right decision, though: sadly, there are in truth precious few windows between natural or manmade catastrophes of one sort or another. And there may be no more productive time for a tool like this than a moment in which disaster is in the news and fresh on a lot of people’s minds.
From my perspective, there’s been one other notable feature of the journey Safety Maps has taken from conception to release: but for an inversion of name, emphasis and colorway (from “Emergency Maps” in red to what you see at present), the site looks, feels and works almost identically to the vision Nurri described to me in Helsinki in October of 2009. In my experience, this almost never happens in the development of a website, and it’s a tribute both to the clarity and comprehensiveness of her original idea, and to Tom and Mike’s resourcefulness and craftsmanship.
I’m also quite fond of the thoughtful little details they’ve built into every layer of the experience, right down to the animated GIFs on the mail you get when you send someone a map. It’s just a lovely thing, and I’m terribly proud to have had even a tiny role in helping Nurri, Tom and Mike build it. Our thanks, also, to Cloudmade and the entire community of Open Street Map contributors, without whom Safety Maps would have remained nothing more than a notion.
I’m halfway through Reinventing the Automobile at the moment, which I figure represents the final comprehensive statement of Bill Mitchell’s thinking about urban mobility. As you’d imagine, it’s a passionately held and painstakingly worked-out vision, basically the summation of all the work anyone with an interest in the space has seen in dribs and drabs over the past few years; it’s clear, for example, that this is what all the work on P.U.M.A. and MIT CityCar was informed by and leading towards.
In outline, Reinventing presents the reader with four essential propositions about the nature of next-generation urban mobility, none of which I necessarily disagree with prima facie:
– That the design principles and assumptions underlying the contemporary automobile — descended as they are, in an almost straight line, from the horseless carriage — are badly obsolete. Specifically, industry conventions regarding a vehicle’s source of motive power, drive and control mechanism, and mode of operation ought to be discarded in their entirety and replaced with ones more appropriate to an age of dense cities, networks, lightweight materials, clean energy and great personal choice.
– That mobility itself is being transformed by information; that extraordinary efficiencies can be realized and tremendous amounts of latent value unlocked if passenger, vehicle and the ground against which both are moving are reconceived as sources and brokers of, and agents upon, real-time data. (Where have I heard that before?)
– That the physical and conceptual infrastructure underlying the generation, storage and distribution of energy is also, and simultaneously, being transformed by information, with implications (again) for the generation of motive power, as well as the provision of environmental, information, communication and entertainment services to vehicles.
– That the above three developments permit (compel?) the wholesale reconceptualization of vehicles as agents in dynamic pricing markets for energy, road-space and parking resources, as well as significantly more conventional vehicle-share schemes.
It’s only that last one that I have any particular quibbles with. Even before accounting for the creepy hints of emergent AI in commodity-trading software I keep bumping up against (and that’s only meant about 75% tongue-in-cheek), I’m not at all convinced that empowering mobile software avatars to bid on road resources in tightly-coupled, nanosecond loops will ever lead to anything but the worst and most literal sort of gridlock.
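The worry can be made concrete with a toy simulation. This is my own illustrative sketch, not anything proposed in the book: a fleet of vehicle avatars all react simultaneously to the last published price for a road segment, and a naive congestion-pricing rule sets the next price from the resulting demand. Because every agent sees, and responds to, the same signal at the same instant, demand lurches between saturation and near-emptiness instead of settling near capacity — the tightly-coupled loop oscillates rather than allocating.

```python
import random

CAPACITY = 40     # vehicles the segment can carry smoothly (illustrative number)
N_AGENTS = 100    # vehicle avatars bidding for road-space
BASE_PRICE = 1.0

def simulate(rounds=20, seed=0):
    """Return the per-round demand for the segment under naive repricing."""
    rng = random.Random(seed)
    price = BASE_PRICE
    history = []
    for _ in range(rounds):
        # Each avatar draws a private willingness-to-pay, then every one of
        # them reacts to the same last-published price at the same moment.
        willing = [rng.uniform(0.5, 2.0) for _ in range(N_AGENTS)]
        demand = sum(1 for w in willing if w >= price)
        # Naive congestion pricing: next price tracks demand over capacity.
        price = BASE_PRICE * demand / CAPACITY
        history.append(demand)
    return history

demands = simulate()
print(demands)  # demand whipsaws well above and well below CAPACITY
```

Nothing about this rules out smarter mechanisms (damped repricing, staggered updates, sealed-bid auctions), but it does suggest that “empower the avatars and let the market clear” is not, by itself, a stable design.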
But that’s not the real problem I have with this body of work. What I really tripped over, as I read, was the titanic dissonance between the MIT vision of urban life and mobility and the one that I was immersed in as I rode the 33 bus across town. It’s a cheap shot, maybe, but I just couldn’t get past the gulf between the actual San Franciscans around me — the enormous, sweet-looking Polynesian kid lost in a half-hour-long spell of autistic head-banging that took him from Oak and Stanyan clear into the Mission; the grizzled but curiously sylphlike person of frankly indeterminate gender, stepping from the bus with a croaked “God bless you, driver” — and the book’s depiction of sleekly silhouetted personae-people reclining into the Pellicle couches of their front-loading CityCars.
Any next-generation personal mobility system that didn’t take the needs and capabilities of people like these — no: these people, as individuals with lives and stories — into account…well, I can’t imagine that any such thing would be worth the very significant effort of bringing it into being. And despite some well-intentioned gestures toward the real urban world in the lattermost part of the book, projected mobility-on-demand sitings for Taipei and so on, there’s very little here that treats present-day reality as anything but something that Shall Be Overcome. It’s almost as if the very, very bright people responsible for Reinventing the Automobile have had to fend off any taint of human frailty, constraint or limitation in order to haul their total vision up into the light. (You want to ask, particularly, if any of them had ever read Aramis.)
Weirdly enough, the whiff of Gesamtkunstwerk I caught off of Reinventing reminded me of nothing so much as a work you’d be hard-pressed to think of as anything but its polar opposite, J.H. Crawford’s Carfree Cities. That, too, is a work where an ungodly amount of effort has been lavished on detailed depictions of the clean-slate future…and that, too, strikes me as refusing to engage the world as it is.
Maybe I wind up so critical of these dueling visions of future cities and mobility in them precisely because they are total solutions, and I’m acutely aware of my own weakness for and tendency toward same. I don’t think I’d mind, at all, living in one of Crawford’s carfree places, nor can I imagine that the MIT cityscape would be anything but an improvement on the status quo (if the devil was hauled out of its details and treated to a righteous ass-whupping). But to paraphrase one of my favorite philosophers, you go to the future with the cities, vehicles and people you have, not the ones you want. I have to imagine — have to — that the truly progressive and meaningful mobility intervention has a lot more to do with building on what people are already doing, and that’s even stipulating the four points above.
Bolt-on kits. Adaptive reuse. Provisional and experimental rezoning. Frameworks, visualizations and models that incorporate existing systems and assets, slowly revealing them (to users, planners, onlookers) to be nothing other than the weavings of a field, elements of a transmobility condition. And maybe someone whose job it is to account for everyone sidelined by the sleek little pods, left out of the renderings when the New Mobility was pitched to its sponsors.
Bottom line: this book is totally worth buying, reading and engaging if you have even the slightest interest in this topic. Its spinal arguments are very well framed, very clearly articulated, constructed in a way that makes them very difficult to mount cogent objections to…and almost certainly irrelevant to the way personal urban mobility is going to evolve, at least at the level of whole systems. And that’s the trouble, really, because so much of the value in the system described in these pages only works as a holism.
Like my every other negotiation with Bill Mitchell’s thought, including both engagements with his work and encounters in person, I want to be convinced. I want to believe. I want to be seduced by the optimism and the confidence that these are the right answers. But ultimately, as on those other occasions, I’m left with the sense that there are some important questions that have gone unasked, and which could not in any event have been satisfactorily answered in the framework offered. It may or may not say more about me than it does about anything else, but I just can’t see how the folks on the 33 Stanyan fit into the MIT futurama.