A place for antiheroic technology
As I seem to have acquired, in some quarters anyway, a reputation as an uncompromising and intractable Luddite where matters of networked technology in everyday domestic life are concerned, I thought I’d share with you today some minor evidence that I’m not unalterably opposed to each and every such appearance. I give you…the Ember.
This is precisely the kind of networked device I might have written off as a near-meaningless frippery a few years ago. It’s a nicely-designed ceramic mug with a rechargeable heating element built into its base, allowing you to set the temperature at which you prefer to drink your coffee or tea.
All it is, really, is a thermostat — but a thermostat in a surprising, and surprisingly welcome, place. There isn’t any computation to speak of going on. The networked aspect is nicely circumspect, and it’s mainly there to let a smartphone app serve as the user interface, keeping the mug itself appropriately stripped down. You pair it with a phone once, on first setup, and that’s it. Everything else is done through the app, and you don’t even need to interact with that too much once you’ve got your preferences dialed in.
I should say that Ember is not perfect, either as a product or as a piece of interaction design. The embedded, multicolor LED fails to communicate much of anything useful, despite its multiple, annoyingly blinky and colorful states; all I really need to know from it is when the mug needs to be recharged. That need arises far too often, at least when it’s set to maintain the temperatures at which I prefer to drink coffee. And inevitably, I have concerns about the nonexistence of any meaningful security measures, a nonexistence that in fairness is endemic to all consumer IoT devices, but remains inexcusable for any of them.
But Ember gets some things right, and when it does, they tend to be very right. By far the most important of these is that it works as a mug, prior to the question of any networked or interactive functionality. The vessel has a good heft to it, and when you set it down on a solid surface, the feeling of a damped but substantial mass that’s transmitted through the rubberized rings at its base is just very, very satisfying. The ceramic surface has a pleasingly velvety texture — so much so, in fact, that you can’t help but wonder if it’s one of those miracle materials that will turn out to have been threshold-carcinogenic twenty or thirty years down the line. It’s gratifyingly easy to clean.
And as far as that additional functionality is concerned, the mug does what it says it will, does it well…and it’s a hoot. It turns out that there’s a real Weiserian frisson to be had from something that violates all the subtle, subconscious expectations you’ve built up over a lifetime of drinking hot beverages from ceramic mugs. The confoundment of assumptions is so deep, indeed, that it takes you awhile to catch up with the new reality — to realize that you can go answer the doorbell or otherwise be distracted for five or ten minutes, and still come back to a piping hot beverage. In fact, Ember stands the principle of evaporative cooling on its head: because the heating element is still set to maintain a larger volume of liquid at a given temperature, and most of that volume will have been drunk away by the time you reach the bottom, your last few swallows are noticeably, delightfully hotter than any you’ve had since first filling the mug.
To be clear, the Ember mug is not something anyone needs, especially at this price point. But I admire its clarity of purpose, in leveraging a modest deployment of technology to furnish its user with a small but nevertheless genuine everyday pleasure. And without wanting to be pompous about matters, I happen to believe there’s a crucial role for small but genuine pleasures in difficult times like the ones we happen to be living through. You may find yourself surprised by the degree to which a sip of hot coffee lands when taken forty or forty-five minutes after brewing — at least, I surely was, and am — and how psychoemotionally sustaining it can be when it does. Most of that is probably the coffee itself, doing what it is that coffee does, but better by far a networked product that is modest and humble in its aims, and succeeds in meeting them, than one which promises everything and does none of it particularly well.
“Against the smart city” teaser
The following is section 4 of “Against the smart city,” the first part of The City Is Here For You To Use. Our Do projects will be publishing “Against the smart city” in stand-alone POD pamphlet and Kindle editions later on this month.
UPDATE: The Kindle edition is now available for purchase.
4 | The smart city pretends to an objectivity, a unity and a perfect knowledge that are nowhere achievable, even in principle.
Of the major technology vendors working in the field, Siemens makes the strongest and most explicit statement[1] of the philosophical underpinnings on which their (and indeed the entire) smart-city enterprise is founded: “Several decades from now cities will have countless autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits and energy consumption, and provide optimum service…The goal of such a city is to optimally regulate and control resources by means of autonomous IT systems.”
We’ve already considered what kind of ideological work is being done when efforts like these are positioned as taking place in some proximate future. The claim of perfect competence Siemens makes for its autonomous IT systems, though, is by far the more important part of the passage. It reflects a clear philosophical position, and while this position is more forthrightly articulated here than it is anywhere else in the smart-city literature, it is without question latent in the work of IBM, Cisco and their peers. Given its foundational importance to the smart-city value proposition, I believe it’s worth unpacking in some detail.
What we encounter in this statement is an unreconstructed logical positivism, which, among other things, implicitly holds that the world is in principle perfectly knowable, its contents enumerable, and their relations capable of being meaningfully encoded in the state of a technical system, without bias or distortion. As applied to the affairs of cities, it is effectively an argument that there is one and only one universal and transcendently correct solution to each identified individual or collective human need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something which can be encoded in public policy, again without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)
Every single aspect of this argument is problematic.
— Perfectly knowable, without bias or distortion: Collectively, we’ve known since Heisenberg that to observe the behavior of a system is to intervene in it. Even in principle, there is no way to stand outside a system and take a snapshot of it as it existed at time T.
But it’s not as if any of us enjoy the luxury of living in principle. We act in historical space and time, as do the technological systems we devise and enlist as our surrogates and extensions. So when Siemens talks about a city’s autonomous systems acting on “perfect knowledge” of residents’ habits and behaviors, what they are suggesting in the first place is that everything those residents ever do — whether in public, or in spaces and settings formerly thought of as private — can be sensed accurately, raised to the network without loss, and submitted to the consideration of some system capable of interpreting it appropriately. And furthermore, that all of these efforts can somehow, by means unspecified, avoid being skewed by the entropy, error and contingency that mark everything else that transpires inside history.
Some skepticism regarding this scenario would certainly be understandable. It’s hard to see how Siemens, or anybody else, could avoid the slippage that’s bound to occur at every step of this process, even under the most favorable circumstances imaginable.
However thoroughly Siemens may deploy their sensors, to start with, they’ll only ever capture the qualities of the world that are amenable to capture, measure only those quantities that can be measured. Let’s stipulate, for the moment, that these sensing mechanisms somehow operate flawlessly, and in perpetuity. What if information crucial to the formulation of sound civic policy is somehow absent from their soundings, resides in the space between them, or is derived from the interaction between whatever quality of the world we set out to measure and our corporeal experience of it?
Other distortions may creep into the quantification of urban processes. Actors whose performance is subject to measurement may consciously adapt their behavior to produce metrics favorable to them in one way or another. For example, a police officer under pressure to “make quota” may issue citations for infractions she would ordinarily overlook; conversely, her precinct commander, squeezed by City Hall to present the city as an ever-safer haven for investment, may downwardly classify[2] felony assault as a simple misdemeanor. This is the phenomenon known to viewers of The Wire as “juking the stats[3],” and it’s particularly likely to happen when financial or other incentives are contingent on achieving some nominal performance threshold. Nor is it the only factor likely to skew the act of data collection; long, sad experience suggests that the usual array of all-too-human pressures will continue to condition any such effort. (Consider the recent case in which Seoul Metro operators were charged with using CCTV cameras to surreptitiously ogle women passengers[4], rather than scan platforms and cars for criminal activity as intended.)
What about those human behaviors, and they are many, that we may for whatever reason wish to hide, dissemble, disguise, or otherwise prevent being disclosed to the surveillant systems all around us? “Perfect knowledge,” by definition, implies either that no such attempts at obfuscation will be made, or that any and all such attempts will remain fruitless. Neither one of these circumstances sounds very much like any city I’m familiar with, or, for that matter, would want to be.
And what about the question of interpretation? The Siemens scenario amounts to a bizarre compound assertion that each of our acts has a single salient meaning, which is always and invariably straightforwardly self-evident — in fact, so much so that this meaning can be recognized, made sense of and acted upon remotely, by a machinic system, without any possibility of mistaken appraisal.
The most prominent advocates of this approach appear to believe that the contingency of data capture is not an issue, nor is any particular act of interpretation involved in making use of whatever data is retrieved from the world in this way. When discussing their own smart-city venture, senior IBM executives[5] argue, in so many words, that “the data is the data”: transcendent, limpid and uncompromised by human frailty. This mystification of “the data” goes unremarked upon and unchallenged not merely in IBM’s material, but in the overwhelming majority of discussions of the smart city. But different values for air pollution in a given location can be produced by varying the height at which a sensor is mounted by a few meters. Perceptions of risk in a neighborhood can be transformed by altering the taxonomy used to classify reported crimes ever so slightly[6]. And anyone who’s ever worked in opinion polling knows how sensitive the results are to the precise wording of a survey. The fact is that the data is never “just” the data, and to assert otherwise is to lend inherently political and interested decisions regarding the act of data collection an unwonted gloss of neutrality and dispassionate scientific objectivity.
The bold claim of perfect knowledge appears incompatible with the messy reality of all known information-processing systems, the human individuals and institutions that make use of them and, more broadly, with the world as we experience it. In fact, it’s astonishing that anyone would ever be so unwary as to claim perfection on behalf of any computational system, no matter how powerful.
— One and only one solution: Given their inherent, definitional diversity, layeredness and complexity, cities can usefully be thought of as tragic. As individuals and communities, the people who live in them hold to multiple competing and equally valid conceptions of the good, and it’s impossible to fully satisfy all of them at the same time. A wavefront of gentrification can open up exciting new opportunities for young homesteaders, small retailers and craft producers, but tends to displace the very people who’d given a neighborhood its character and identity. An increased police presence on the streets of a district reassures some residents, but makes others uneasy, and puts yet others at definable risk. Even something as seemingly straightforward and honorable as an anticorruption initiative can undo a fabric of relations that offered the otherwise voiceless at least some access to local power. We should know by now that there are and can be no[7] Pareto-optimal solutions for any system as complex as a city.
— Arrived at algorithmically: Assume, for the sake of argument, that there could be such a solution, a master formula capable of resolving all resource-allocation conflicts and balancing the needs of all a city’s competing constituencies. It certainly would be convenient if this golden mean could be determined automatically and consistently, via the application of a set procedure — in a word, algorithmically.
In urban planning, the idea that certain kinds of challenges are susceptible to algorithmic resolution has a long pedigree. It’s already present in the Corbusian doctrine that the ideal and correct ratio of spatial provisioning in a city can be calculated from nothing more than an enumeration of the population; it underpins the complex composite indices of Jay Forrester’s 1969 Urban Dynamics[8]; and it lay at the heart of the RAND Corporation’s (eventually disastrous) intervention in the management of 1970s New York City[9]. No doubt part of the idea’s appeal to smart-city advocates, too, is the familial resemblance such an algorithm would bear to the formulae by which commercial real-estate developers calculate air rights, the land area that must be reserved for parking in a community of a given size, and so on.
In the right context, at the appropriate scale, such tools are surely useful. But the wholesale surrender of municipal management to an algorithmic toolset — for that is surely what is implied by the word “autonomous” — would seem to repose an undue amount of trust in the party responsible for authoring the algorithm. At least, if the formulae at the heart of the Siemens scenario turn out to be anything at all like the ones used in the current generation of computational models, critical, life-altering decisions will hinge on the interaction of poorly-defined and surprisingly subjective values: a “quality of life” metric, a vague category of “supercreative[10]” occupations, or other idiosyncrasies along these lines. The output generated by such a procedure may turn on half-clever abstractions, in which a complex circumstance resistant to direct measurement is represented by the manipulation of some more easily-determined proxy value: average walking speed stands in for the more inchoate “pace” of urban life, while the number of patent applications constitutes an index of “innovation.”
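To make concrete just how much discretion is involved, here is a deliberately toy sketch, in Python, of a composite index of the kind described above. Every variable, value and weight in it is hypothetical, invented for illustration rather than drawn from any vendor’s model; the point is only that the choice of weights, itself an act of authorship, determines the output.

```python
def composite_index(proxies, weights):
    """Weighted sum of proxy values, each assumed to be pre-normalized to a 0-1 scale."""
    return sum(weights[k] * proxies[k] for k in weights)

# Hypothetical proxy values for a single city.
city = {
    "avg_walking_speed": 0.72,    # proxy for the "pace" of urban life
    "patent_applications": 0.31,  # proxy for "innovation"
    "supercreative_jobs": 0.55,   # proxy for a "quality of life" style metric
}

# Two equally defensible-looking weightings yield materially different scores.
weights_a = {"avg_walking_speed": 0.2, "patent_applications": 0.5, "supercreative_jobs": 0.3}
weights_b = {"avg_walking_speed": 0.5, "patent_applications": 0.2, "supercreative_jobs": 0.3}

print(composite_index(city, weights_a))  # ~0.46
print(composite_index(city, weights_b))  # ~0.59
```

Nothing in the data dictates which weighting is “correct”; that decision belongs entirely to whoever writes the formula.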
Even beyond whatever doubts we may harbor as to the ability of algorithms constructed in this way to capture urban dynamics with any sensitivity, the element of the arbitrary we see here should give us pause. Given the significant scope for discretion in defining the variables on which any such thing is founded, we need to understand that the authorship of an algorithm intended to guide the distribution of civic resources is itself an inherently political act. And at least as things stand today, neither in the Siemens material nor anywhere else in the smart-city literature is there any suggestion that either algorithms or their designers would be subject to the ordinary processes of democratic accountability.
— Encoded in public policy, and applied transparently, dispassionately and in a manner free from politics: A review of the relevant history suggests that policy recommendations derived from computational models are only rarely applied to questions as politically sensitive as resource allocation without some intermediate tuning taking place. Inconvenient results may be suppressed, arbitrarily overridden by more heavily-weighted decision factors, or simply ignored.
The best-documented example of this tendency remains the work of the New York City-RAND Institute, explicitly chartered to implant in the governance of New York City “the kind of streamlined, modern management that Robert McNamara applied in the Pentagon with such success[11]” during his tenure as Secretary of Defense (1961-1968). The statistics-driven approach that McNamara’s Whiz Kids had so famously brought to the prosecution of the war in Vietnam, variously thought of as “systems analysis” or “operations research,” was first applied to New York in a series of studies conducted between 1973 and 1975, in which RAND used FDNY incident response-time data[12] to determine the optimal distribution of fire stations.
Methodological flaws undermined the effort from the outset. RAND, for simplicity’s sake, chose to use the time a company arrived at the scene of a fire as the basis of their model, rather than the time at which that company actually began fighting the fire; somewhat unbelievably, for anyone with the slightest familiarity with New York City, RAND’s analysts then compounded their error by refusing to acknowledge traffic as a factor in response time[13]. Again, we see some easily-measured value used as a proxy for a reality that is harder to quantify, and again we see the distortion of ostensibly neutral results by the choices made by an algorithm’s designers. But the more enduring lesson for proponents of data-driven policy has to do with how the study’s results were applied. Despite the mantle of coolly “objective” scientism that systems analysis preferred to wrap itself in, RAND’s final recommendations bowed to factionalism within the Fire Department, as well as the departmental leadership’s need to placate critical external constituencies; the exercise, in other words, turned out to be nothing if not political.
The consequences of RAND’s intervention were catastrophic. Following their recommendations, fire battalions in some of the most vulnerable sections of the city were decommissioned, while the department opened other stations in low-density, low-threat areas; the spatial distribution of the remaining firefighting assets actually prevented resources from being applied where they were most critically needed. Great swaths of the city’s poorest neighborhoods burned to the ground as a direct result — most memorably the South Bronx, but immense tracts of Manhattan and Brooklyn as well. Hundreds of thousands of residents were displaced, many permanently, and the unforgettable images that emerged fueled perceptions of the city’s nigh-apocalyptic unmanageability that impeded its prospects well into the 1980s. Might a less-biased model, or a less politically-skewed application of the extant findings, have produced a more favorable outcome? This obviously remains unknowable…but the human and economic calamity that actually did transpire is a matter of public record.
Examples like this counsel us to be wary of claims that any autonomous system will ever be entrusted with the regulation and control of civic resources — just as we ought to be wary of claims that the application of some single master algorithm could result in a Pareto-efficient distribution of resources, or that the complex urban ecology might be sufficiently characterized in data to permit the effective operation of such an algorithm in the first place. For all of the conceptual flaws we’ve identified in the Siemens proposition, though, it’s the word “goal” that just leaps off the page. In all my thinking about cities, it has frankly never occurred to me to assert that cities have goals. (What is Cleveland’s goal? Karachi’s?) What is being suggested here strikes me as a rather profound misunderstanding of what a city is. Hierarchical organizations can be said to have goals, certainly, but not anything as heterogeneous in composition as a city, and most especially not a city in anything resembling a democratic society.
By failing to account for the situation of technological devices inside historical space and time, the diversity and complexity of the urban ecology, the reality of politics or, most puzzlingly of all, the “normal accidents[14]” all complex systems are subject to, Siemens’ vision of cities perfectly regulated by autonomous smart systems thoroughly disqualifies itself. But it’s in this depiction of a city as an entity with unitary goals that it comes closest to self-parody.
If it seems like breaking a butterfly on a wheel to subject marketing copy to this kind of dissection, bear in mind that I am merely taking Siemens and the other advocates of the smart city at their word: this is what they (claim to) really believe. When pushed on the question, of course, some individuals working for enterprises at the heart of the smart-city discourse admit that what their employers actually propose to do is distinctly more modest: they simply mean to deploy sensors on municipal infrastructure, and adjust lighting levels, headway or flow rates to accommodate real-time need. If this is the case, perhaps they ought to have a word with their copywriters, who do the endeavor no favors by indulging in the imperial overreach of their rhetoric. As matters now stand, the claim of perfect competence that is implicit in most smart-city promotional language — and thoroughly explicit in the Siemens material — is incommensurate with everything we know about the way technical systems work, as well as the world they work in. The municipal governments that constitute the primary intended audience for materials like these can only be advised, therefore, to approach all such claims with the greatest caution.

Notes
[1] Siemens Corporation. “Sustainable Buildings — Networked Technologies: Smart Homes and Cities,” Pictures of the Future, Fall 2008.
• foryoutou.se/siemenstotal
[2] For example, in New York City, an anonymous survey of “hundreds of retired high-ranking [NYPD] officials” found that “tremendous pressure to reduce crime, year after year, prompted some supervisors and precinct commanders to distort crime statistics” they submitted to the centralized COMPSTAT system. Chen, David W., “Survey Raises Questions on Data-Driven Policy,” The New York Times, 08 February 2010.
• foryoutou.se/jukingthenypd
[3] Simon, David, Kia Corthron, Ed Burns and Chris Collins, The Wire, Season 4, Episode 9: “Know Your Place,” first aired 12 November 2006.
[4] Asian Business Daily. “Subway CCTV was used to watch citizens’ bare skin sneakily,” 16 July 2013. (In Korean.)
• foryoutou.se/seoulcctv
[5] Fletcher, Jim, IBM Distinguished Engineer, and Guruduth Banavar, Vice President and Chief Technology Officer for Global Public Sector, personal communication, 08 June 2011.
[6] Migurski, Michal. “Visualizing Urban Data,” in Segaran, Toby and Jeff Hammerbacher, Beautiful Data: The Stories Behind Elegant Data Solutions, O’Reilly Media, Sebastopol CA, 2012: pp. 167-182. See also Migurski, Michal. “Oakland Crime Maps X,” tecznotes, 03 March 2008.
• foryoutou.se/oaklandcrime
[7] See, as well, Sen’s dissection of the inherent conflict between even mildly liberal values and Pareto optimality. Sen, Amartya Kumar. “The impossibility of a Paretian liberal.” Journal of Political Economy Volume 78 Number 1, Jan-Feb 1970.
• foryoutou.se/nopareto
[8] Forrester, Jay. Urban Dynamics, The MIT Press, Cambridge, MA, 1969.
[9] See Flood, Joe. The Fires: How a Computer Formula Burned Down New York City — And Determined The Future Of American Cities, Riverhead Books, New York, 2010.
[10] See, e.g. Bettencourt, Luís M.A. et al. “Growth, innovation, scaling, and the pace of life in cities,” Proceedings of the National Academy of Sciences, Volume 104 Number 17, 24 April 2007, pp. 7301-7306.
• foryoutou.se/superlinear
[11] Flood, ibid., Chapter Six.
[12] Rider, Kenneth L. “A Parametric Model for the Allocation of Fire Companies,” New York City-RAND Institute report R-1615-NYC/HUD, April 1975; Kolesar, Peter. “A Model for Predicting Average Fire Company Travel Times,” New York City-RAND Institute report R-1624-NYC, June 1975.
• foryoutou.se/randfirecos
• foryoutou.se/randfiretimes
[13] See the Amazon interview with Fires author Joe Flood.
• foryoutou.se/randfires
[14] Perrow, Charles. Normal Accidents: Living with High-Risk Technologies, Basic Books, New York, 1984.
On augmenting reality
The following is the draft of a section from my forthcoming book, The City Is Here For You To Use, concerning various ways in which networked devices are used to furnish the mobile pedestrian with a layer of location-specific information superimposed onto the forward view — “augmented reality,” in other words. (The context is an extended discussion of four modes in which information is returned from the global network to the world so it may be engaged, considered and acted upon, which is why the bit here starts in medias res.)
As you see it here, the section is not quite in its final form; it hasn’t yet been edited for meter, euphony or flow, and in particular, some of the arguments toward the end remain too telescoped to really stand up to much inspection. Nevertheless, given the speed at which wearable AR is evolving, I thought it would be better to get this out now as-is, to garner your comments and be strengthened by them. I hope you enjoy it.
1
One seemingly potent way of returning networked information to the world would be if we could layer it directly over that which we perceive. This is the premise of so-called augmented reality, or AR, which proposes to furnish users with some order of knowledge about the world and the objects in it, via an overlay of informational graphics superimposed on the visual field. In principle, this augmentation is agnostic as to the mediating artifact involved, which could be the screen of a phone or tablet, a vehicle’s windshield, or, as Google’s Glass suggests, a lightweight, face-mounted reticle.
AR has its conceptual roots in informational displays developed for military pilots in the early 1960s, at the point when the performance of enemy fighter aircraft began to overwhelm a human pilot’s ability to react. In the fraught regime of jet-age dogfighting, even a momentary dip of the eyes to a dashboard-mounted instrument cluster could mean disaster. The solution was to project information about altitude, airspeed and the status of weapons and other critical aircraft systems onto a transparent pane aligned with the field of vision, a “head-up display.”
This notion turned out to have applicability in fields beyond aerial combat, where the issue wasn’t so much reaction time as it was visual complexity. One early AR system was intended to help engineers make sense of the gutty tangle of hydraulic lines, wiring and control mechanisms in the fuselage of an airliner under construction; each component in the otherwise-hopeless confusion was overlaid with a visual tag identifying it by name, and colored according to the system it belonged to.
Other systems were designed to help people manage situations in which both time and the complexity of the environment were sources of pressure — for example, to aid first responders in dispelling the fog and chaos they’re confronted with upon arrival at the scene of an emergency. One prototype furnished firefighters with visors onto which structural diagrams of a burning building were projected, along with symbols indicating egress routes, the position of other emergency personnel, and the presence of electric wiring or other potentially dangerous infrastructural elements.
The necessity of integrating what were then relatively crude and heavy cameras, motion sensors and projectors into a comfortably wearable package limited the success of these early efforts — and this is to say nothing of the challenges posed by the difficulty of establishing a reliable network connection to a mobile unit. But the conceptual heavy lifting done to support these initial forays produced a readymade discourse, waiting for the day augmentation might be reinstantiated in smaller, lighter, more capable hardware.
That is a point we appear to have arrived at with the advent of the smartphone. As we’ve seen, the smartphone handset can be thought of as a lamination together of several different sensing and presentation technologies, subsets of which can be recombined with one another to produce distinctly different ways of engaging networked information. Bundle a camera, accelerometer/gyroscope, and display screen in a single networked handset, and what you have in your hands is indeed an artifact capable of sustaining rudimentary augmentation. Add GPS functionality and a three-dimensional model of the world — either maintained onboard the device, or resident in the cloud — and a viewer can be offered location-specific information, registered with and mapped onto the surrounding urban fabric.
In essence, phone-based AR treats the handset like the transparent pane of a cockpit head-up display: you hold it before you, its camera captures the forward-facing view, and this is rendered on the screen transparently but for whatever overlay of information is applied. Turn and the on-screen view turns with you, tracked (after a momentary stutter) by the grid of overlaid graphics. And those graphics can provide anything the network can: identification, annotation, direction or commentary.
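For concreteness, the geometry underlying such an overlay can be sketched in a few lines of Python. The field-of-view and screen-width figures here are assumptions chosen purely for illustration, not the specification of any actual handset or app, and the coordinates in the usage example are approximate.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the device to a point of interest, in degrees from true north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def overlay_x(poi_bearing, device_heading, fov_deg=60.0, screen_width_px=1170):
    """Horizontal pixel at which to draw the overlay, or None if the POI lies outside the camera's view."""
    offset = (poi_bearing - device_heading + 180) % 360 - 180  # signed angle between heading and POI
    if abs(offset) > fov_deg / 2:
        return None
    return int((offset / fov_deg + 0.5) * screen_width_px)

# The device's position and heading come from its GPS and compass; the point of
# interest's coordinates come from the world model held onboard or in the cloud.
poi_b = bearing_deg(40.7424, -73.9744, 40.7440, -73.9490)  # illustrative coordinates only
print(overlay_x(poi_b, device_heading=80.0))
```

Everything else — the captions, icons and annotations themselves — is simply drawn at the position this calculation returns, and redrawn as the sensors report each new heading.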
It’s not hard to see why developers and enthusiasts might jump at this potential, even given the sharp limits imposed by the phone as platform. We move through the world and we act in it, but the knowledge we base our movements and actions on is always starkly less than what it might be. And we pay the price for this daily, in increments of waste, frustration, exhaustion and missed opportunity. By contrast, the notion that everything the network knows might be brought to bear on someone or -thing standing before us, directly there, directly present, available to anyone with the wherewithal to sign a two-year smartphone contract and download an app — this is a deeply seductive idea. It offers the same aura of omnipotence, that same frisson of godlike power evoked by our new ability to gather, sift and make meaning of the traces of urban activity, here positioned as a direct extension of our own senses.
2
Why not take advantage of this capability? After all, the richness and complexity of city life confronts us with any number of occasions on which the human sensorium could do with a little help.
Let a few hundred neurons in the middle fusiform gyrus of the brain’s right hemisphere be damaged, or fail to develop properly in the first place, and the result is a disorder called prosopagnosia, more commonly known as faceblindness. As the name suggests, the condition deprives its victims of the ability to recognize faces and associate them with individuals; at the limit, someone suffering with a severe case may be entirely unable to remember what his or her loved ones look like. So central is the ability to recognize others to human socialization, though, that even far milder cases cause significant problems.
Sadly, this is something I can attest to from firsthand experience. Like an estimated 2.5%[1] of the population, I suffer from the condition, and even in the relatively attenuated form I’m saddled with, my broad inability to recognize people has caused more than a few experiences of excruciating awkwardness. At least once or twice a month I run into people on the street who clearly have some degree of familiarity with me, and find myself unable to come up with even a vague idea of who they might be; I’ll introduce myself to a woman at a party, only to have her remind me (rather waspishly, but who can blame her) that we’d worked together on a months-long project. Deprived of contextual cues — the time and location at which I usually meet someone, a distinctive hairstyle or mode of dress — I generally find myself no more able to recognize former colleagues or students than I can complete strangers. And as uncomfortable as this can be for me, I can only imagine how humiliating it is for the person on the other end of the encounter.
I long ago lost track of the number of times in my life at which I would have been grateful for some subtle intercessionary agent: something that might drop a glowing outline over the face of someone approaching me and remind me of his or her name[2], the occasion on which we met last, maybe even what we talked about on that occasion. It would spare both of us from mortification, and shield my counterpart from the inadvertent but real insult implied by my failure to recognize them. So the ambition of using AR in this role is lovely — precisely the kind of sensitive technical deployment I believe in, where technology is used to lower the barriers to socialization, and reduce or eliminate the awkwardnesses that might otherwise prevent us from better knowing one another.
But it’s hard to imagine any such thing being accomplished by the act of holding a phone up in front of my face, between us, forcing you to wait first for me to do so and then for the entire chain of technical events that must follow in order to fulfill the aim at the heart of the scenario. The device must acquire an image of your face with the camera, establish the parameters of that face from the image, and upload those parameters to the cloud via the fastest available connection, so they may be compared with a database of facial measurements belonging to known individuals; if a match is found, the corresponding profile must be located, and the appropriate information from that profile piped back down the connection so it may be displayed as an overlay on the screen image.
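Laid out schematically, that chain of events might look something like the sketch below. Every function in it is a stub standing in for a capability the scenario assumes (face detection, a cloud-side biometric database, a profile service); none of it corresponds to any real device API or commercial service.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    name: str
    last_met: str

def capture_frame():
    """1. Acquire an image of the face with the camera."""
    return b"raw-image-bytes"

def extract_face_parameters(image):
    """2. Establish the parameters of the face from the image (or fail to find one)."""
    return [0.12, 0.87, 0.44]

def match_against_biometric_db(params):
    """3. Upload the parameters and compare them against a remote database of known faces."""
    return "user-1234"

def fetch_profile(user_id) -> Optional[Profile]:
    """4. Locate the corresponding profile, if the match succeeded."""
    return Profile(name="A. Colleague", last_met="a months-long project, two years ago")

def identify_person():
    """Run the whole chain; a failure at any link means no overlay is shown at all."""
    params = extract_face_parameters(capture_frame())
    if params is None:
        return None
    user_id = match_against_biometric_db(params)
    if user_id is None:
        return None
    profile = fetch_profile(user_id)
    if profile is None:
        return None
    # 5. Pipe the relevant details back down the connection for display as an overlay.
    return f"{profile.name}, last met: {profile.last_met}"

print(identify_person())
```

Even rendered this generously, the sketch makes the point: each link is a separate dependency, and the overlay appears only if every one of them holds, in sequence, before the moment has passed.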
Too many articulated parts are involved in this interaction, too many dependencies — not least of which is the coöperation of a Facebook, a Google, or some other enterprise with a reasonably robust database of facial biometrics, and that is of course wildly problematic for other reasons. Better I should have confessed my confusion to you in the first place.
Perhaps a less technologically-intensive scenario would be better suited to the phone as platform for augmentation? How about helping a user find their way around the transit system, amidst all the involutions of the urban labyrinth?
3
Here we can weigh the merits of the use case by considering an actual, shipping product, Acrossair’s Nearest Subway app for the iPhone, first released in 2010[3]. Like its siblings for London and Paris, Nearest Tube and Nearest Metro, Nearest Subway uses open location data made available by the city’s transit authority to specify the positions of transit stops in three-dimensional space. On launch, the app loads a hovering scrim of simple black tiles featuring the name of each station, and icons of the lines that serve it; the tiles representing more distant stations are stacked atop those that are closer. Rotate, and the scrim of tiles rotates with you. Whichever way you face, you’ll see a tile representing the nearest subway station in the direction of view, so long as some outpost of the transit network lies along that bearing in the first place.
Nearest Subway is among the more aesthetically appealing phone-based AR applications, eschewing junk graphics for simple, text-based captions sensitively tuned to the conventions of each city’s transit system. If nothing else, it certainly does what it says on the tin. It is, however, almost completely worthless as a practical aid to urban navigation.
When aimed to align with the Manhattan street grid from the corner of 30th Street and First Avenue, Nearest Subway indicates that the 21st Street G stop in Long Island City is the closest subway station, at a distance of 1.4 miles in a north-northeasterly direction.
As it happens, there are a few problems with this. For starters, from this position the Vernon Boulevard-Jackson Avenue stop on the 7 line is 334 meters, or roughly four New York City blocks, closer than 21st Street, but it doesn’t appear as an option. This is either an exposure of some underlying lacuna in the transit authority’s database — unlikely, but as anyone familiar with the MTA understands implicitly, well within the bounds of possibility — or more probably a failure on Acrossair’s part to write code that retrieves these coordinates properly.
Just as problematically, the claimed bearing is roughly 55 degrees off. If, as will tend to be the case in Manhattan, you align yourself with the street grid, a phone aimed directly uptown will be oriented at 27 degrees east of due north, at which point Nearest Subway suggests that the 21st Street station is directly ahead of you. But it actually lies on an azimuth of 82 degrees; if you took the app at its word, you’d be walking uptown a long time before you hit anything even resembling a subway station. This is most likely to be a calibration error with the iPhone’s compass, but fairly or otherwise Nearest Subway shoulders the greater part of the blame here — as anyone familiar with computational systems has understood since the time of Babbage, if you put garbage in, you’ll get garbage out.
Furthermore, since by design the app only displays those stations roughly aligned with your field of vision, there’s no way for it to notify you that the nearest station may be directly behind your back. Unless you want to rotate a full 360 degrees, then, and make yourself look like a complete idiot in the process, the most practical way to use Nearest Subway is to aim the phone directly down, which makes a reasonably useful ring of directional arrows and distances pop up. (These, of course, could have been superimposed on a conventional map in the first place, without undertaking the effort of capturing the camera image and augmenting it with a hovering overlay of theoretically compass-calibrated information.)
However unfortunate these stumbles may be, they can all be resolved, addressed with tighter code, an improved user interface or a better bearing-determination algorithm. Acrossair could fix them all, though — enter every last issue in a bug tracker, and knock them down one by one — and that still wouldn’t address the primary idiocy of urban AR in this mode: from 30th Street and First Avenue, the 21st Street G stop is across the East River. You need to take a subway to get there in the first place. However aesthetically pleasing an interface may be, using it to find the closest station as the crow flies does you less than no good when you’re separated from it by a thousand meters of water.
Finally, Nearest Subway betrays a root-level misunderstanding of the relationship between a citydweller and a transportation network. In New York City, as in every other city with a complex underground transit system, you almost never find yourself in a situation where you need to find the station that’s nearest in absolute terms to begin with; it’s far more useful to find the nearest station on a line that gets you where you want to go. Even at the cost of cluttering what’s on the screen, then, the very first thing the would-be navigator of the subway system needs is a way to filter the options before them by line.
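A minimal sketch, in Python, of the kind of filtering this implies: given stations tagged with the lines that serve them, the useful query is not the closest station in absolute terms but the closest station on the line you actually need. The station coordinates below are approximate, included only to make the example run.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates, for illustration only.
stations = [
    {"name": "21 St (G)",                  "lines": {"G"}, "lat": 40.744, "lon": -73.949},
    {"name": "Vernon Blvd-Jackson Av (7)", "lines": {"7"}, "lat": 40.742, "lon": -73.954},
    {"name": "33 St (6)",                  "lines": {"6"}, "lat": 40.746, "lon": -73.982},
]

def nearest_station(lat, lon, wanted_line=None):
    """Nearest station overall, or nearest station served by the line you actually need."""
    candidates = [s for s in stations if wanted_line is None or wanted_line in s["lines"]]
    return min(candidates, key=lambda s: haversine_m(lat, lon, s["lat"], s["lon"]), default=None)

here = (40.7424, -73.9744)  # near 30th Street and First Avenue, approximately
print(nearest_station(*here)["name"])                   # nearest in absolute terms
print(nearest_station(*here, wanted_line="7")["name"])  # nearest station on the line you need
```

The filtering itself is trivial; what matters is that the query reflects how people actually use a transit network, rather than how a compass and a coordinate list happen to see it.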
I raise these points not to park all of the blame at Acrossair’s door, but to suggest that AR itself is badly unsuited to this role, at least when handled in this particular way. It takes less time to load and use a map than it does to retrieve the same information from an augmentive application, and the map provides a great deal more of the context so necessary to orienting yourself in the city. At this point in technological evolution, then, more conventional interface styles will tend to furnish a user with relevant information more efficiently, with less of the latency, error and cruft that inevitably seem to attend the attempt to superimpose it over the field of vision.
4
If phone-based augmentation performs poorly as social lubricant or aid to urban navigation, what about another role frequently proposed for AR, especially by advocates in the cultural heritage sector? This use case hinges on the argument that by superimposing images or other vestiges of the past of a place directly over its present, AR effectively endows its users with the ability to see through time.
This might not make much sense at all in Songdo, or Masdar, or any of the other new cities now being built from scratch on greenfield sites. But anyone who lives in a place old enough to have felt the passage of centuries knows that history can all too easily be forgotten by the stones of the city. Whatever perturbations from historical events may still be propagating through the various flows of people, matter, energy and information that make a place, they certainly aren’t evident to casual inspection. An augmented view returning the layered past to the present, in such a way as to color our understanding of the things all around us, might certainly prove to be more emotionally resonant than any conventional monument.
Byzantium, old Edo, Roman Londinium, even New Amsterdam: each of these historical sites is rife with traces we might wish to surface in the city occupying the same land at present. Locales overwhelmed by more recent waves of colonization, gentrification or redevelopment, too, offer us potent lenses through which to consider our moment in time. It would surely be instructive to retrieve some record of the jazz- and espresso-driven Soho of the 1950s and layer it over what stands there at present; the same goes for the South Bronx of 1975. But traversed as it was during the twentieth century by multiple, high-intensity crosscurrents of history, Berlin may present the ultimate terrain on which to contemplate recuperation of the past.
This is a place where pain, guilt and a sense of responsibility contend with the simple desire to get on with things; no city I’m familiar with is more obsessively dedicated to the search for a tenable balance between memory and forgetting. The very core of contemporary Berlin is given over to a series of puissant absences and artificially-sustained presences, from the ruins of Gestapo headquarters, now maintained as a museum called Topography of Terror, to the remnants of Checkpoint Charlie. A long walk to the east out leafy Karl-Marx-Allee — Stalinallee, between 1949 and 1961 — takes you to the headquarters of the Stasi, the feared secret police of the former East Germany, also open to the public as a museum. But there’s nowhere in Berlin where the curious cost of remembering can be more keenly felt than in the field of 2,711 concrete slabs at the corner of Ebertstrasse and Hannah-Arendt-Strasse. This is the Memorial to the Murdered Jews of Europe, devised by architect Peter Eisenman, with early conceptual help from the sculptor Richard Serra.
Formally, the grim array is the best thing Eisenman has ever set his hand to, very nearly redemptive of a career dedicated to the elevation of fatuous theory over aesthetic coherence; perhaps it’s the Serra influence. But as a site of memory, the Memorial leaves a great deal to be desired. It’s what Michel Foucault called a heterotopia: something set apart from the ordinary operations of the city, physically and semantically, a place of such ponderous gravity that visitors don’t quite know what to make of it. On my most recent visit, the canyons between the slabs rang with the laughter of French schoolchildren on a field trip; the children giggled and flirted and shouted to one another as they leapt between the stones, and whatever the designer’s intent may have been, any mood of elegy or commemoration was impossible to establish, let alone maintain.
Roughly two miles to the northeast, on the sidewalk in front of a doner stand in Mitte, is a memorial of quite a different sort. Glance down, and you’ll see the following words, inscribed into three brass cubes set side by side by side between the cobblestones:
HIER WOHNTE
ELSA GUTTENTAG
GEB. KRAMER
JG. 1883
DEPORTIERT 29.11.1942
ERMORDET IN
AUSCHWITZ
HIER WOHNTE
KURT GUTTENTAG
JG. 1877
DEPORTIERT 29.11.1942
ERMORDET IN
AUSCHWITZ
HIER WOHNTE
ERWIN BUCHWALD
JG. 1892
DEPORTIERT 1.3.1943
ERMORDET IN
AUSCHWITZ
Ermordet in Auschwitz: that is, on specific dates in November of 1942 and March of the next year, the named people living at this address were taken across this very sidewalk and forcibly transported hundreds of miles east by the machinery of their own government, to a country they’d never known and a facility expressly designed to murder them. The looming façades around you were the last thing they ever saw as free people.
It’s in the dissonance between the everyday bustle of Mitte and these implacable facts that the true horror resides — and that’s precisely what makes the brass cubes a true memorial, indescribably more effective than Eisenman’s. The brass cubes, it turns out, are Stolpersteine, or “stumbling blocks,” a project of artist Gunter Demnig; these are but three of what are now over 32,000 that Demnig has arranged to have placed in some 700 cities. The Stolpersteine force us to read this stretch of unremarkable sidewalk in two ways simultaneously: both as a place where ordinary people go placidly about their ordinary business, just as they did in 1942, and as one site of a world-historical, continental-scale ravening.
The stories etched in these stones are the kind of facts about a place that would seem to yield to a strategy of augmentation. The objection could certainly be raised that I found them so resonant precisely because I didn’t see them every day, and that their impact would very likely fade with constant exposure; we might call this the evil of banality. But being compelled to see and interpret the mundane things I did in these streets through the revenant past altered my consciousness, in ways subtler and longer-lasting than anything Eisenman’s sepulchral array of slabs was able to achieve. AR would merely make the metaphor literal — in fact, it’s easy for me to imagine the disorienting, decentering, dis-placing impact of having to engage the world through a soft rain of names, overlaid onto the very places from which their owners were stolen.
But once again, it’s hard to imagine this happening via the intercession of a handset. Nor are the qualities that make smartphone-based AR so catastrophically clumsy, in virtually every scenario of use, particularly likely to change over time.
The first is the nature of functionality on the smartphone. As we’ve seen, the smartphone is a platform on which each discrete mode of operation is engaged via a dedicated, single-purpose app. Any attempt at augmenting the environment, therefore, must be actively and consciously invoked, to the exclusion of other useful functionality. The phone, when used to provide such an overlay, cannot also and at the same time be used to send a message, look up an address, buy a cup of coffee, or do any of the other things we now routinely expect of it.
The second reservation is physical. Providing the user with a display surface for graphic annotation of the forward view simply isn’t what the handset was designed to do. It must be held before the eyes like a pane of glass in order for the augmented overlay to work as intended. It hardly needs to be pointed out that this gesture is not one particularly well-suited to the realities of urban experience. It has the doubly unappealing quality of announcing the user’s distraction and vulnerability to onlookers, while simultaneously ensuring that the device is held in the weak grip of the extended arm — a grasp from which it may be plucked with relative ease.
Taken together, these two impositions strongly undercut the primary ostensible virtue of an augmented view, which is its immediacy. The sole genuine justification for AR is the idea that information is simply there, copresent with that which you’re already looking at and able to be assimilated without thought or effort.
That sense of effortlessness is precisely what an emerging class of wearable mediators aims to provide for its users. The first artifact of this class to reach consumers is Google’s Glass, which mounts a high-definition, forward-facing camera, a head-up reticle and the microphone required by the natural-language speech recognition interface on a lightweight aluminum frame. While Glass poses any number of aesthetic, practical and social concerns — all of which remain to be convincingly addressed, by Google or anyone else — it does at least give us a way to compare hands-free, head-mounted AR with the handset-based approach.
Would any of the three augmentation scenarios we explored be improved by moving the informational overlay from the phone to a wearable display?
5
A system designed to mitigate my prosopagnosia by recognizing faces for me would assuredly be vastly better when accessed via head-mounted interface; in fact, that’s the only scenario of technical intervention in relatively close-range interpersonal encounters that’s credible to me. The delay and physical awkwardness occasioned by having to hold a phone between us goes away, and while there would still be a noticeable saccade or visual stutter as I glanced up to read your details off my display, this might well be preferable to not being remembered at all.
That is, if we can tolerate the very significant threats to privacy involved, which only start with Google’s ownership of or access to the necessary biometric database. There’s also the question of their access to the pattern of my requests, and above all the one fact inescapably inherent to the scenario: that people are being identified as being present in a certain time and place, without any necessity whatsoever of securing consent on their part. By any standard, this is a great deal of risk to take on, all to lubricate social interactions for 2.5% of the population.
Nearest Subway, as is, wouldn’t be improved by presentation in the line of sight. Given what we’ve observed about the way people really use subway networks, information about the nearest station in a given direction wouldn’t be of any greater utility when splashed on a head-up display than it is on the screen of a phone. Whatever the shortcomings of this particular app, though, they probably don’t imply anything in particular about the overall viability of wearable AR in the role of urban navigation, and in many ways the technology does seem rather well-suited to the wayfinding challenges faced by the pedestrian.
Of the three scenarios considered here, though, it’s AR’s potential to offer novel perspectives on the past of a place that would be most likely to benefit from the wearable approach. We would quite literally see the quotidian environment through the lens of a history superimposed onto it. So equipped, we could more easily plumb the psychogeographical currents moving through a given locale, better understand how the uses of a place had changed over time, or hadn’t. And because this layer of information could be selectively surfaced — invoked and banished via voice command, toggled on or off at will — presenting information in this way might well circumvent the potential for banality through overfamiliarization that haunts even otherwise exemplary efforts like Demnig’s Stolpersteine.
And this suggests something about further potentially productive uses for augmentive mediators like Glass. After all, there are many kinds of information that may be germane to our interpretation of a place, yet effectively invisible to us, and historical context is just one of them. If our choices are shaped by dark currents of traffic and pricing, crime and conviviality, it’s easy to understand the appeal of any technology proposing that these dimensions of knowledge be brought to bear on that which is seen, whether singly or in combination. The risk of bodily harm, whatever its source, might be rendered as a red wash over the field of vision; point-by-point directions as a bright and unmistakable guideline reaching into the landscape. In fact any pattern of use and activity, so long as its traces were harvested by some data-gathering system and made available to the network, might be made manifest to us in this way.
Some proposed uses of mediation are more ambitious still, pushing past mere annotation of the forward view to the provision of truly novel modes of perception — for example, the ability to “see” radiation at wavelengths beyond the limits of human vision, or even to delete features of the visual environment perceived as undesirable[4]. What, then, keeps wearable augmentation from being the ultimate way for networked citizens to receive and act on information?
6
The approach of practical, consumer-grade augmented reality confronts us with an interlocking series of concerns, ranging from the immediately practical to the existential.
A first set of reservations centers on the technical difficulties involved in the articulation of an acceptably high-quality augmentive experience. We’ve so far bypassed discussion of these so we could consider different aspects of the case for AR, but ultimately they’re not of a type that allows anyone to simply wave them away.
At its very core, the AR value proposition subsists in the idea that interactions with information presented in this way are supposed to feel “effortless,” but any such effortlessness would require the continuous (and continuously smooth) interfunctioning of a wild scatter of heterogeneous elements. In order to make good on this promise, a mediation apparatus would need to fuse all of the following elements: a sensitively-designed interface; the population of that interface with accurate, timely, meaningful and actionable information; and a robust, high-bandwidth connection to the networked assets furnishing that information from any point in the city, indoors or out. Even putting questions of interface design to the side, the technical infrastructure capable of delivering the other necessary elements reliably enough that the attempt at augmentation doesn’t constitute a practical and social hazard in its own right does not yet exist — not anywhere in North America, anyway, and not this year or next. The hard fact is that for a variety of reasons having to do with national spectrum policy, a lack of perceived business incentives for universal broadband connectivity, and other seemingly intractable circumstances, these issues are nowhere near being ironed out.
In the context of augmentation, as well, the truth value of representations made about the world acquires heightened significance. By superimposing information directly on its object, AR arrogates to itself a peculiar kind of claim to authority, a claim of a more aggressive sort than that implicit in other modes of representation, and therefore ought to be held to a higher standard of completeness and accuracy[5]. As we saw with Nearest Subway, though, an overlay can only ever be as good as the data feeding it, and the auguries in this respect are not particularly reassuring. Right now, Google’s map of the commercial stretch nearest to my apartment building provides labels for only four of the seven storefront businesses on the block, one of which is inaccurately identified as a restaurant that closed many years ago. If even Google, with all the resources it has at its disposal, struggles to provide its users with a description of the streetscape that is both comprehensive and correct, how much more daunting will other actors find the same task?
Beyond this are the documented problems with visual misregistration[6] and latency that are of over a decade’s standing, and have not been successfully addressed in that time — if anything, have only been exacerbated by the shift to consumer-grade hardware. At issue is the mediation device’s ability to track rapid motions of the head, and smoothly and accurately realign any graphic overlay mapped to the world; any delay in realignment of more than a few tens of milliseconds is conspicuous, and risks causing vertigo, nausea and problems with balance and coordination. The initial release of Glass, at least, wisely shies away from any attempt to superimpose such overlays, but the issue must be reckoned with at some point if useful augmentive navigational applications are ever to be developed.
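The arithmetic behind that few-tens-of-milliseconds threshold is simple enough to sketch: during a head turn, an overlay drawn from stale pose data lags the world by the product of rotation rate and latency. The rotation rate, field of view and display width below are assumed values, chosen only to make the calculation concrete.

```python
def misregistration_deg(rotation_deg_per_s, latency_ms):
    """Angular lag of the overlay: how far the world has rotated while the pose data aged."""
    return rotation_deg_per_s * latency_ms / 1000.0

def misregistration_px(error_deg, fov_deg=40.0, display_width_px=1280):
    """The same lag expressed in pixels, on an assumed 40-degree, 1280-pixel-wide display."""
    return error_deg / fov_deg * display_width_px

for latency in (20, 50, 100):                  # milliseconds of motion-to-photon delay
    err = misregistration_deg(150.0, latency)  # an assumed, unexceptional 150 deg/s head turn
    print(f"{latency} ms -> {err:.1f} deg of lag, ~{misregistration_px(err):.0f} px of misregistration")
```

Even at fifty milliseconds of delay the annotation is already several degrees adrift of its object, which is ample to be conspicuous.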
7
Another set of concerns centers on the question of how long such a mediator might comfortably be worn, and what happens after it is taken off. This is of especial concern given the prospect that one or another form of wearable AR might become as prominent in the negotiation of everyday life as the smartphone itself. There is, of course, not much in the way of meaningful prognostication that can be made ahead of any mass adoption, but it’s not unreasonable to build our expectations on the few things we do know empirically.
Early users of Google’s Glass report disorientation upon removing the headset, after as few as fifteen minutes of use — a mild one, to be sure, and easily shaken off, from all accounts the sort of uneasy feeling that attends staring overlong at an optical illusion. If this represents the outer limit of discomfort experienced by users, it’s hard for me to believe that it would have much impact on either the desirability of the product or people’s ability to function after using it. But further hints as to the consequences of long-term use can be gleaned from the testimony of pioneering researcher Steve Mann, who has worn a succession of ever-lighter and more-capable mediation rigs all but continuously since the mid-1980s. And his experience would seem to warrant a certain degree of caution: Mann, in his own words, early on “developed a dependence on the apparatus,” and has found it difficult to function normally on the few occasions he has been forcibly prevented from accessing his array of devices.
When deprived of his set-up for even a short period of time, Mann experiences “profound nausea, dizziness and disorientation”; he can neither see clearly nor concentrate, and has difficulty with basic cognitive and motor tasks[7]. He speculates that over many years, his neural wiring has adapted to the continuous flow of sensory information through his equipment, and this is not an entirely ridiculous thing to think. At this point, the network of processes that constitutes Steve Mann’s brain — that in some real albeit reductive sense constitutes Steve Mann — lives partially outside his skull.
The objection could be made that this is always already the case, for all of us — that some nontrivial part of everything that makes us what we are lives outside of us, in the world, and that Mann’s situation is only different in that much of his outboard being subsists in a single, self-designed apparatus. But if anything, this makes the prospect of becoming physiologically habituated to something like Google Glass still more worrisome. It’s precisely because Mann developed and continues to manage his own mediation equipment that he can balance his dependency on it with the relative freedom of action enjoyed by someone who for the most part is able to determine the parameters under which that equipment operates.
If Steve Mann has become a radically hybridized consciousness, at least he has a legitimate claim to ownership and control over all of the places where that consciousness is instantiated. By contrast, all of the things a commercial product like Glass can do for the user rely on the ongoing provision of a service — and if there’s anything we know about services, it’s that they can be and are routinely discontinued at will, as the provider fails, changes hands, adopts a new business strategy or simply reprioritizes.
8
A final set of strictly practical concerns has to do with the collective experience of augmentation, or what implications our own choice to be mediated in this way might hold for the experience of others sharing the environment.
For all it may pretend to transparency, literally and metaphorically, any augmentive mediator by definition imposes itself between the wearer and the phenomenal world. This, of course, is by no means a quality unique to augmented reality. It’s something AR has in common with a great many ways we already buffer and mediate what we experience as we move through urban space, from listening to music to wearing sunglasses. All of these impose a certain distance between us and the full experiential manifold of the street, either by baffling the traces of it that reach our senses, or by offering us a space in which we can imagine and project an alternative narrative of our actions.
But there’s a special asymmetry that haunts our interactions with networked technology, and tends to undermine our psychic investment in the immediate physical landscape; if “cyberspace is where you are when you’re on the phone,” it’s certainly also the “place” you are when you text or tweet someone while walking down the sidewalk. I’ve generally referred to what happens when someone moves through the city while simultaneously engaged in some kind of remote interaction as a condition of “multiple adjacency,” but of course it’s really no such thing: so far, at least, only one mode of spatial experience can be privileged at a given time. And if it’s impossible to participate fully in both of these realms at once, one of them must lose out.
Watch what happens when a pedestrian first becomes conscious of receiving a call or a text message, the immediate damming they cause in the sidewalk flow as they pause to respond to it. Whether the call is made hands-free or otherwise doesn’t really seem to matter; the cognitive and emotional investment in what transpires in the interface is what counts, and this investment is generally so much greater than it is in the surroundings that street life clearly suffers as a result. The risk inherent in this divided attention appears to be showing up in the relevant statistics in the form of an otherwise hard-to-account-for upturn in accidents involving pedestrian fatalities[8], where such numbers had been falling for years. This is a tendency that is only likely to be exacerbated by augmentive mediation, particularly where content of high inherent emotional involvement is concerned.
9
At this moment in time, it would be hard to exaggerate the appeal the prospect of wearable augmentation holds for its vocal cohort of enthusiasts within the technology community. This fervor can be difficult to comprehend, so long as AR is simply understood to refer to a class of technologies aimed at overlaying the visual field with information about the objects and circumstances in it.
What the discourse around AR shares with other contemporary trans- and posthuman narratives is a frustration with the limits of the flesh, and a frank interest in transcending them through technical means. To advocates, the true appeal of projects like Google’s Glass is that they are first steps toward the fulfillment of a deeper promise: that of becoming-cyborg. Some suggest that ordinary people mediate the challenges of everyday life via complex informational dashboards, much like those first devised by players of World of Warcraft and similar massively multiplayer online role-playing games. The more fervent dream of a day when their capabilities are enhanced far beyond the merely human by a seamless union of organic consciousness with networked sensing, processing, analytic and storage assets.
Beyond the profound technical and practical challenges involved in achieving any such goal, though, someone not committed to one or another posthuman program may find that they have philosophical reservations about this notion, and what it implies for urban life. These may be harder to quantify than strictly practical objections, but any advocate of augmentation technologies who is also interested in upholding the notion of a city as a shared space will have to come to some reckoning with them.
Anyone who cares about what we might call the full bandwidth of human communication — very much including transmission and reception of those cues vital to understanding, but only present beneath the threshold of conscious perception — ought to be concerned about the risk posed to interpersonal exchanges by augmentive mediation. Wearable devices clearly have the potential to exacerbate existing problems of self-absorption and mutual inconsideration[9]. Although in principle there’s no reason such devices couldn’t be designed to support or even enrich the sense of intersubjectivity, what we’ve seen about the technologically-mediated pedestrian’s unavailability to the street doesn’t leave us much room for optimism on this count. The implication is that if the physical environment doesn’t fully register to a person so equipped, neither will other people.
Nor is the body by any means the only domain that the would-be posthuman subject may wish to transcend via augmentation. Subject as it is to the corrosive effects of entropy and time, and forcing those occupying it to contend with the inconvenient demands of others, the built environment is another. Especially given current levels of investment in physical infrastructure in the United States, there is a very real risk that those who are able to do so will prefer retreat behind a wall of mediation to the difficult work of being fully present in public. At its zenith, this tendency implies both a dereliction of public space and an almost total abandonment of any notion of a shared public realm. This is the scenario imagined by science-fiction author Vernor Vinge in Rainbows End (2006), in which people interact with the world’s common furniture through branded thematic overlays of their choice; it’s a world that can be glimpsed in the matter-of-factly dystopian videos of Keiichi Matsuda, in which a succession of squalid environments comes to life only when activated by colorful augmentive animations.
The most distressing consequences of such a dereliction would be felt by those left behind in any rush toward augmentation. What happens when the information necessary to comprehend and operate an environment is not immanent to that environment, but has become decoupled from it? When signs, directions, notifications, alerts and all the other instructions necessary to the fullest use of the city appear only in an augmentive overlay, and as is inevitably the case, that overlay is available to some but not others[10]? What happens to the unaugmented human under such circumstances? The perils would surely extend beyond a mere inability to act on information; the non-adopter of a particularly hegemonic technology almost always places themselves at jeopardy of being seen as a willful transgressor of norms, even an ethical offender. Anyone forgoing augmentation, for whatever reason, may find that they are perceived as somehow less than a full member of the community, with everything that implies for the right to be and act in public.
The deepest critique of all those lodged against augmented reality is sociologist Anne Galloway’s, and it is harder to answer. Galloway suggests that the discourse of computational augmentation, whether consciously or otherwise, “position[s] everyday places and social interactions as somewhat lacking or in need of improvement.” Again there’s this Greshamization, this sense of a zero-sum relationship between AR and a public realm already in considerable peril just about everywhere. Maybe the emergence of these systems will spur us to some thought as to what it is we’re trying so hard to augment. Philip K. Dick once defined reality as “that which refuses to go away when you stop believing in it,” and it’s this bedrock quality of universal accessibility — to anyone at all, at any time of his or her choosing — that constitutes its primary virtue. If nothing else, reality is the one platform we all share, a ground we can start from in undertaking the arduous and never-comfortable process of determining what else we might agree upon. To replace this shared space with the million splintered and mutually inconsistent realities of individual augmentation is to give up on the whole pretense that we in any way occupy the same world, and therefore strikes me as being deeply inimical to the urban project as I understand it. A city in which the physical environment has ceased to function as a common reference frame is, at the very least, terribly inhospitable soil for democracy, solidarity or simple fellow-feeling to take root in.
It may well be that this concern is overblown. There is always the possibility that augmented reality never will amount to very much, or that after a brief period of consideration it’s actively rejected by the mainstream audience. Within days of the first significant nonspecialist publicity around Google Glass, Seattle dive bar The 5 Point became the first commercial establishment known to have enacted a ban[11] on the device, and if we can fairly judge from the rather pungent selection of terms used to describe Glass wearers in the early media commentary, it won’t be the last. By the time you read these words, these weak signals may well have solidified into some kind of rough consensus, at least in North America, that wearing anything like Glass in public space constitutes a serious faux pas. Perhaps this and similar AR systems will come to rest in a cultural-aesthetic purgatory like that currently occupied by Bluetooth headsets, and if that does turn out to be the case, any premature worry about the technology’s implications for the practice of urban democracy will seem very silly indeed.
But something tells me that none of the objections we’ve discussed here will prove broadly dissuasive, least of all my own personal feelings on the subject. For all the hesitations anybody may have, and for all the vulnerabilities even casual observers can readily diagnose in the chain of technical articulations that produces an augmentive overlay, it is hard to argue against a technology that glimmers with the promise of transcendence. Over anything beyond the immediate near term, some form of wearable augmentive device does seem bound to take a prominent role in returning networked information to the purview of a mobile user at will, and thereby in mediating the urban experience. The question then becomes what kind(s) of urbanity will be produced by people endowed with this particular set of capabilities, individually and collectively, and how we might help the unmediated contend with cities unlike any they have known, enacted for the convenience of the ambiguously transhuman, under circumstances whose depths have yet to be plumbed.

Notes on this section
[1] Grüter T, Grüter M, Carbon CC (2008). “Neural and genetic foundations of face recognition and prosopagnosia”. J Neuropsychol 2 (1): 79–97.
[2] For early work toward this end, see http://www.cc.gatech.edu/~thad/p/journal/augmented-reality-through-wearable-computing.pdf. The overlay of a blinking outline or contour used as an identification cue, incidentally, has long been a staple of science-fictional information displays, showing up in pop culture as far back as the late 1960s. The earliest appearance I can locate is 2001: A Space Odyssey (1968), in which the navigational displays of both the Orion III spaceplane and Discovery itself relied heavily on the trope — this, presumably, because they were produced by the same contractor, IBM. See also Pete Shelley’s music video for “Homosapien” (1981) and the traverse corridors projected through the sky of Blade Runner’s Los Angeles (1982).
[3] As always, I caution the reader that the specifics of products and services, and even their availability, will certainly change over time. All comments here regarding Nearest Subway pertain to v1.4.
[4] See discussion of “Superplonk” in [a later section]. http://m.spectrum.ieee.org/podcast/geek-life/profiles/steve-manns-better-version-of-reality
[5] At the very least, user interface should offer some kind of indication as to the confidence of a proffered identification, and perhaps how that determination was arrived at. See [a later section] on seamfulness.
[6] Azuma, “Registration Errors in Augmented Reality,” 1997. http://www.cs.unc.edu/~azuma/azuma_AR.html
[7] http://www.nytimes.com/2002/03/14/technology/at-airport-gate-a-cyborg-unplugged.html
[8] See Governors Highway Safety Association, “Spotlight on Highway Safety: Pedestrian Fatalities by State,” 2010. http://www.ghsa.org/html/publications/pdf/spotlights/spotlight_ped.pdf; similarly, a recent University of Utah study found that the act of immersion in a conversation, rather than any physical aspect of use, is the primary distraction while driving and talking on the phone. That hands-free headset may not keep you out of a crash after all. http://www.informationweek.com/news/showArticle.jhtml?articleID=205207840
[9] A story on the New York City-based gossip site Gawker expressed this point of view directly, if rather pungently: “If You Wear Google’s New Glasses, You Are An Asshole.” http://gawker.com/5990395/if-you-wear-googles-new-glasses-you-are-an-asshole
[10] The differentiation involved might be very fine-grained indeed. Users may interact with informational objects that exist only for them and for that single moment.
[11] The first widespread publicity for Glass coincided with Google’s release of a video on Wednesday, 20th February, 2013; The 5 Point announced its ban on 5th March. The expressed concerns center more on the device’s data-collection capability than anything else: according to owner Dave Meinert, his customers “don’t want to be secretly filmed or videotaped and immediately put on the Internet,” and this is an entirely reasonable expectation, not merely in the liminal space of a dive bar but anywhere in the city. See http://news.cnet.com/8301-1023_3-57573387-93/seattle-dive-bar-becomes-first-to-ban-google-glass/
Thought for the day
The notion that the minimally diagnostic criterion of a networked object is that “it knows the right time” is very curious, in that it refers to what may be the primordially alienating regime to which human life is subjected. Is it the case, therefore, that exposure to such objects or abjects cannot help but reinforce an estrangement from the world and from being-in-the-world?
Every user a developer: A brief history, with hopeful branches
Google’s recent announcement of App Inventor is one of those back-to-the-future moments that simultaneously stirs up all kinds of furtive and long-suppressed hope in my heart…and makes me wonder just what the hell has taken so long, and why what we’re being offered is still so partial and wide of the mark.
I should explain. At its simplest, App Inventor does pretty much what it says on the tin. The reason it’s generating so much buzz is because it offers the non-technically inclined, non-coders among us an environment in which we can use simple visual tools to create reasonably robust mobile applications from scratch — in this case, applications for the Android operating system.
In this, it’s another step toward a demystification and user empowerment that had earlier been gestured at by scripting environments like Apple’s Automator and (to a significantly lesser degree) Yahoo! Pipes. But you used those things to perform relatively trivial manipulations on already-defined processes. I don’t want to overstate its power, especially without an Android device of my own to try the results on, but by contrast you use App Inventor to make real, usable, reusable applications, at a time when we understand our personal devices to be little more than a scrim on which such applications run, and there is a robust market for them.
This is a radical thing to want to do, in both senses of that word. In its promise to democratize the creation of interactive functionality, App Inventor speaks to an ambition that has largely lain dormant beneath what are now three or four generations of interactive systems — one, I would argue, that is inscribed in the rhetoric of object-oriented programming itself. If functional units of executable code can be packaged in modular units, those units in turn represented by visual icons, and those icons presented in an environment equipped with drag-and-drop physics and all the other familiar and relatively easy-to-grasp interaction cues provided us by the graphical user interface…then pretty much anybody who can plug one Lego brick into another has what it takes to build a working application. And that application can both be used “at home,” by the developer him- or herself, and released into the wild for others to use, enjoy, deconstruct and learn from.
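To make the Lego-brick metaphor a shade more concrete, here is a deliberately toy sketch, in Python, of the kind of composition a blocks environment performs on the user’s behalf. Every component name here is invented for the purpose of illustration; an actual tool like App Inventor generates a good deal more machinery than this.

# A toy illustration of "snapping bricks together." Every name is invented.
from datetime import datetime

class Button:
    def __init__(self, label):
        self.label = label
        self._handlers = []

    def on_click(self, handler):
        # The visual equivalent: dragging a "when Button.Click" block into place.
        self._handlers.append(handler)

    def click(self):
        # Simulate the user tapping the button.
        for handler in self._handlers:
            handler()

class TextToSpeech:
    def speak(self, message):
        print(f"(speaking) {message}")

class Clock:
    def now(self):
        return datetime.now().strftime("%H:%M")

# The whole "application," such as it is: three bricks and one connection.
button, voice, clock = Button("What time is it?"), TextToSpeech(), Clock()
button.on_click(lambda: voice.speak(f"It is {clock.now()}"))
button.click()

The point is not the code itself, but that every operation in it has an obvious visual analogue — which is exactly the wager App Inventor is making.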
There’s more to it than that, of course, but that’s the crux of what’s at stake here in schematic. And this is important because, for a very long time now, the corpus of people able to develop functionality, to “program” for a given system, has been dwindling as a percentage of interactive technology’s total userbase. Each successive generation of hardware from the original PC onward has expanded the userbase — sometimes, as with the transition from laptops to network-enabled phones, by an order of magnitude or more.
The result, which strikes me as unseemly, is that some five billion people on Earth have by now embraced interactive networked devices as an intimate part of their everyday lives, while the tools and languages necessary to develop software for them have remained arcane, the province of a comparatively tiny community. And the culture that community has in time developed around these tools and languages? Highly arcane — as recondite and unwelcoming, to most of us, as a klatsch of Comp Lit majors mulling phallogocentrism in Derrida and the later works of M.I.A.
A further consequence of this — unlooked-for, perhaps, but no less significant for all of that — is that the community of developers winds up having undue influence over how users conceive of interactive devices, and the kinds of things they might be used for. Alan Kay’s definition of full technical literacy, remember, was the ability to both read and write in a given medium — to create, as well as consume. And by these lights, we’ve been moving further and further away from literacy and the empowerment it so reliably entrains for a very long time now.
So an authoring environment that made creation as easy as consumption — especially one that, like View Source in the first wave of Web browsers, exposed something of how the underlying logical system functioned — would be a tremendous thing. Perhaps naively, I thought we’d get something like this with the original iPhone: a latterday HyperCard, a tool lightweight and graphic and intuitive as the device itself, but sufficiently powerful that you could make real things with it.
Maybe that doesn’t mesh with Apple’s contemporary business model, though, or stance regarding user access to deeper layers of device functionality, or whatever shoddy, paternalistic rationale they’ve cooked up this week to justify their locking iOS against the people who bought and paid for it. And so it’s fallen to Google, of all institutions, to provide us with the radically democratizing thing; the predictable irony, of course, is that in look and feel, the App Inventor composition wizard is so design-hostile, so Google-grade that only the kind of engineer who’s already comfortable with more rigorous development alternatives is likely to find it appealing. The idea is, mostly, right…but the execution is so very wrong.
There’s a deeper issue still, though, which is why I say “mostly right.” Despite applauding any and every measure that democratizes access to development tools, in my heart of hearts I actually think “apps” are a moribund way of looking at things. That the “app economy” is a dead end, and that even offering ordinary people the power to develop real applications is something of a missed opportunity.
Maybe that’s my own wishful thinking: I was infected pretty early on with the late Jef Raskin’s way of thinking about interaction, as explored in his book The Humane Interface and partially instantiated in the Canon Cat. What I took from my reading of Raskin is the notion that chunking up the things we do into hard, modal “applications” — each with a discrete user interface, each (still!) requiring time to load, each presenting us with a new learning curve — is kind of foolish, especially when there are a core set of operations that will be common to virtually everything you want to do with a device. Some of this thinking survives in the form of cross-application commands like Cut, Copy and Paste, but still more of it has seemingly been left by the wayside.
There are ways in which Raskin’s ideas have dated poorly, but in others his principles are as relevant as ever. I personally believe that, if those of us who conceive of and deliver interactive experiences truly want to empower a userbase that is now on the order of billions of people, we need to take a still deeper cut at the problem. We need to climb out of the application paradigm entirely, and figure out a better and more accessible way of representing distributed computational processes and how to get information into and out of them. And we need to do this now, because we can clearly see that those interactive experiences are increasingly taking place across and between devices and platforms — at first for those of us in the developed world, and very soon now, for everyone.
In other words, I believe we need to articulate a way of thinking about interactive functionality and its development that is appropriate to an era in which virtually everyone on the planet spends some portion of their day using networked devices; to a context in which such devices and interfaces are utterly pervasive in the world, and the average person is confronted with a multiplicity of same in the course of a day; and to the cloud architecture that undergirds that context. Given these constraints, neither applications nor “apps” are quite going to cut it.
Accordingly, in my work at Nokia over the last two years, I’ve been arguing (admittedly to no discernible impact) that as a first step toward this we need to tear down the services we offer and recompose them from a kit of common parts, an ecology of free-floating, modular functional components, operators and lightweight user-interface frameworks to bind them together. The next step would then be to offer the entire world access to this kit of parts, so anyone at all might grab a component and reuse it in a context of their own choosing, to develop just the functionality they or their social universe require, recognize and relate to. If done right, then you don’t even need an App Inventor, because the interaction environment itself is the “inventor”: you grab the objects you need, and build what you want from them.
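By way of illustration only, the sketch below gestures at what this might feel like in practice; every component name in it is hypothetical, standing in for shared parts a platform might someday expose rather than anything Nokia or anyone else has actually shipped. The “application” is nothing more than the momentary wiring-together of pieces that exist independently of it.

# Hypothetical sketch of ad-hoc composition from a kit of shared parts.
# None of these components correspond to any real platform's offering.

def nearby_places(location):
    # Stand-in for a shared "places" component the platform might expose.
    return [{"name": "Cafe Aalto", "kind": "cafe"},
            {"name": "Ateneum", "kind": "museum"}]

def only(kind):
    # A small, reusable filter operator.
    return lambda items: [item for item in items if item["kind"] == kind]

def notify(items):
    # A lightweight user-interface binding; print stands in for whatever surface is handy.
    for item in items:
        print(f"Nearby: {item['name']}")

# The "app" is just this momentary flow, assembled and discarded at will.
flow = [nearby_places, only("cafe"), notify]
value = "Helsinki"
for step in flow:
    value = step(value)

Nothing about this is sophisticated, and that is exactly the point.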
One, two, many Facebooks. Or Photoshops. Or Tripits or SketchUps or Spotifys. All interoperable, all built on a framework of common tools, all producing objects in turn that could be taken up and used by any other process in the weave.
This approach owes something to Ben Cerveny’s seminal talk at the first Design Engaged, though there he was primarily concerned with semantically-tagged data, and how an ecosystem of distributed systems might make use of it. There’s something in it that was first sparked by my appreciation of Jun Rekimoto’s Data Tiles, and it also has some underlying assumptions in common with the rhetoric around “activity streams.” What I ultimately derive from all of these efforts is the thought that we (yes: challenge that “we”) ought to be offering the power of ad-hoc process definition in a way that any one of us can wrap our heads around, which would in turn underwrite the most vibrant, fecund/ating planetary ecosystem of such processes.
In this light, Google’s App Inventor is both a wonderful thing, and a further propping-up of what I’m bound to regard as a stagnating and unhelpful paradigm. I’m both excited to see what people do with it, and more than a little saddened that this is still the conversation we’re having, here in 2010.
There is one further consideration for me here, though, that tends to soften the blow. Not that I’m at all comparing myself to them, in the slightest, but I’m acutely aware of what happens to the Ted Nelsons and Doug Engelbarts of the world. I’ve seen what comes of “visionaries” whose insight into how things ought to be done is just that little bit too far ahead of the curve, how they spend the rest of their careers (or lives) more or less bitterly complaining about how partial and unsatisfactory everything that actually does get built turned out to be. If all that happens is that App Inventor and its eventual, more aesthetically well-crafted progeny do help ordinary people build working tools, firmly within the application paradigm, I’ll be well pleased — well pleased, and no mistake. But in some deeper part of me, I’ll always know that we could have gone deeper still, taken on the greater challenge, and done better by the people who use the things we make.
We still can.
My back pages: Spimed
Originally published 17 October 2004 on my old v-2.org site. Very, very interesting for me to see how my feelings have evolved, and where they remain consistent; there are probably as many instances of the former as of the latter. Plus, all those “Sterlings” now feel so stilted and formal and unnatural. (Hi, Bruce!) At any rate: enjoy.
If spam simply isn’t annoying enough to suit your needs, or you’re the kind of person who’s disappointed by the disarming ease you encounter when upgrading your laptop’s operating system to a new version, then boy does Bruce Sterling ever have a vision of the future for you.
Refining the message of his much-linked speech from this year’s SIGGRAPH conference in a new piece for Wired, Sterling draws us a picture of a coming time when intelligent, deeply internetworked and self-authenticating objects dominate the physical world: an “expensive, fussy, fragile, hopelessly complex” world, where entirely new forms of “theft, fraud [and] vandalism” await us.
I preface my comments the way I do because Sterling isn’t warning us about this world. He’s enthusing about it.
To some degree, in the SIGGRAPH speech, Sterling’s thrown us a definitional curveball. Having previously defined a “blobject” as an artifact of digital creation “with a curvilinear, flowing design, such as the Apple iMac computer and the Volkswagen Beetle,” he now asks us to step back a level of abstraction, and understand the word instead to mean an object that contains its own history digitally. Possibly realizing that this bait-and-switch presents abundant opportunities for confusion, he rescues himself at the last moment by substituting for “blobject” a new coinage, “spime”: “an object tracked precisely in space and time.”
And then he proceeds to imagine a world in which this self-documenting, self-tracking, self-extending stuff he calls spime dominates utterly, or is allowed to become utterly dominant. (Whatever one thinks of this particular coinage and its descriptive utility, there clearly was the need for a word here. As Sterling quite correctly points out, this is a class of objects without precedent in human history.)
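To make the category a little less abstract, here is a purely notional sketch, in Python, of the kind of self-description such an object might carry; every field and value is my own invention, offered in the same illustrative spirit as the mocked-up t-shirt record linked at the end of this piece.

# A notional spime record: one object's self-description and accumulated history.
# All fields and values are invented for illustration.

spime = {
    "id": "urn:example:shirt:0001",
    "class": "t-shirt",
    "manufactured": {"when": "2004-08-12", "where": "Ho Chi Minh City"},
    "materials": ["organic cotton", "polyester thread"],
    "positions": [
        # The "tracked precisely in space and time" part.
        {"when": "2004-09-30T10:02Z", "lat": 40.73, "lon": -73.99},
        {"when": "2004-10-17T08:41Z", "lat": 40.68, "lon": -73.97},
    ],
    "annotations": [
        {"when": "2004-10-17", "by": "owner", "note": "sleeve repaired"},
    ],
    "end_of_life": {"recyclable": True, "instructions": "return to any participating retailer"},
}

# Every appended position or annotation extends the object's history --
# and raises the question, taken up below, of who wrote it and whether it can be trusted.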
put this product into service
I have a lot to say about the notion of such chimeric object/product/service hybrids, both because I think Sterling’s onto something important and real, and because the direction he takes it in worries me.
He’s got unusually fine and sensitive antennae; as a novelist, fabulist, extrapolator, raconteur and ranter, he’s terrific. But as a designer and an organiser of design, oh…let’s just say Sterling’s taste leaves something to be desired. So when he starts talking about “an imperial paradigm…a weltanschauung and a grand schemata [sic],” for designed objects, my ears perk up.
And it’s when he suggests that we have little choice but to prepare ourselves for a world of
– spime spam (vacuum cleaners that bellow ads for dust bags);
– spime-owner identity theft, fraud, malware, vandalism, and pranks;
– organized spime crime;
– software faults that make even a mop unusable;
– spime hazards (kitchens that fry the unwary, cars that drive off bridges);
– unpredictable emergent forms of networked spime behavior;
– objects that once were inert and are now expensive, fussy, fragile, hopelessly complex, and subversive of established values…
that I begin to get truly uncomfortable.
It’s not that Sterling’s identified the hazards improperly. Just the opposite: these are precisely (some of) the unpleasant eventualities we need to plan for in any setting of pervasive or ubiquitous “intelligence” (and which I discuss in a forthcoming article entitled “All watched over by machines of loving grace”). [Note: This article was essentially the genesis of Everyware.]
The problem is that he appears to be suggesting that “cop[ing] with” these headaches is about all that we can do, so obvious is the superiority of spime, and so inevitable its hegemony. Locked into technological determinism, he does little to challenge this here, beyond suggesting that, oh yeah, now that you mention it, this “imperial paradigm” might not necessarily be maximally convenient for its human subjects. This “ideal technology for concentration camps, authoritarian regimes, and prisons” is, yes, “a hassle. An enormous hassle.” But relax: “[I]t’s a fruitful hassle.”
With his unusually acute vision, Sterling can see something like this looming on the horizon and still be so cavalier as to suggest that, if we can only “cope with” these “hassles,” “spimes will be a massive improvement over the present closed, blind regime.” (Haven’t we heard all this better-living-through-chemistry noise before?) Such a stance strikes me as a not inconsiderable abdication of the role of anyone gifted with foresight. (It also strikes me as presuming a parallel abdication among designers, but we’ll get to that in a bit.)
It’s frustrating because I share, almost without exception, Sterling’s larger goals. He simply wants to save humanity from itself, from a situation in which we seem hellbent on drowning ourselves and whatever posterity we may achieve in tidal surges of our own noxious effluvia, and he’s looking for any help he can get from the technical side of the house. I get this, the essential good will undernetting the vision of spime.
But while I share a lot of Sterling’s faith in the ferment of human creativity, I’m not nearly as comfortable as he is with assuming that the results will always be “fruitful.”
the user and the used
I derive my suspicions not a little bit from what I know of the history of open-source software, in which applications that should by rights dominate their respective niches for their robustness or power or utility fail time and again to find the wider audience they deserve. I lay a lot of this to their user interfaces, which, designed by geeks for geeks as they are, almost invariably fail any other kind of user. The distributed nature of open-source creation seems to militate against the consistency required for a smooth, consumer-grade user experience.
Of course, one might point out that this inconsistency is inevitably implied in the core logic of open-source development, or anything like it: that notions of highly crafted user interfaces and content architectures are just so many farty, self-indulgent Rick Wakeman solos, bound to be cut down before the whirling DIY thresher of the new mutant thing.
Unless I’m badly mistaken, from what I’m able to gather from two decades of reading him this stance seems to capture something of Sterling’s position — that he doesn’t have much room for designers, mewling pitifully from the sidelines in all the impotence of their top-down, command-and-control obsolescence. Technology is destiny. The street will find its own uses; do what thou wilt shall be the whole of the law; great shall be the rejoicing.
It’s a weird thing to find myself on this side of history, given my other interests, and I’m not sure but that it may be a strategic mistake to even accept this framing of things, but here I go:
I do not believe that we want to live in a world where the best we can hope for is “wrangling” a surge of fast, cheap, out-of-control, autocatalytic blobjects. I simply do not believe that what we give up is worth less than what we are promised, even if what we are promised is delivered in anything close to full.
Control isn’t all DRM, you know. Control also means design with compassion, which is something whose complexities I believe we are just beginning to get a handle on. Control also means permitting (some) introduction of randomness in the service of a defined end. And for sure it means getting out ahead of foreseeable problems and taking measures to prevent their emergence.
To surrender this measure of control — to insist that all bottom-up, all the time is any kind of a path to a better world, and that all we can or should do is get out of the way — is fatuous, even negligent. (Indeed, “allowing otherwise avoidable dangers to manifest” defines negligence in the Anglo-American jurisprudential tradition.) Just in the last ten minutes, as I’m writing this, a correspondent tells me that an SMS-based survey inquiring as to who users believed the 100 Greatest South Africans to be had to be abandoned by its originators because the notorious fascist Eugene Terreblanche popped out at the top.
Importantly, I don’t believe that Bruce Sterling believes any such thing, either. I don’t think for a moment that he would propose that we accept, or accept himself, a situation in which people gave up all control over the things we build.
I just know, all too well, what happens to nuanced distinctions in the wild.
i contradict myself/i contain multitudes
Let me also take this opportunity to problematize even the notion that an object can usefully contain its own history. It’s a fetching, even an intoxicating idea, and you can easily see all the ways in which such a thing might be desirable. But whose history are we talking about, exactly?
Nurri’s work on the New York Public Library’s African-American Migration Experience project provides us with a nice capsule illustration of some of the problems involved when an item is recursively accompanied by descriptive information as it travels down through time. One of her responsibilities at the Digital Library is verifying that archival images have accurate metadata, fields describing the contents of an image.
Imagine that she’s come across a picture from 1920s Strivers’ Row, with a scrawled annotation on the back of it: “Some prominent local Negroes.” (This is not at all an atypical example.) An accurate provision of metadata, of course, requires transcribing the contemporaneous description word for word. But obviously, “prominent Negroes” is not going to fly as an object descriptor in 2004 — and nor should it, less from any feeling of political correctness (though there is that) than from the simple reason that few in 2004 are likely to search a database using the keyword “negroes,” unless it’s in a context like “Negro League baseball.”
And here the infinite regress beckons. Say you append both contemporary and historical tags to the image: “Images – Harlem – African-Americans” and “Caption – 1927 – ‘Some prominent local Negroes'”. You may have covered the obvious bases, but that’s nothing like a full history. To ensure the full understanding of someone arriving at the object from some context external in space, time, or both, you would also have to include information about the evolution of the English language and the society in which it’s used, just to explain why the 1927 label wasn’t considered appropriate a mere seventy-five years later. You see where this is going? (Sterling himself points out that “[o]nce we tag many things, we will find that there is no good place to stop tagging.”)
Sure, memory is cheap, and will be cheaper. It’s not storing such a bottomless effusion of autodescription that I’m concerned about. It’s how useful this metadata will be, any of it, when its reliability will be hard to gauge – when different parts of an object’s record, introduced at different junctures in “space and time,” may well have differing degrees of reliability, and little way to distinguish between them!
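One way to make that problem visible is to imagine each descriptive field carrying its provenance alongside its content. The sketch below is entirely my own construction, not the Digital Library’s actual schema or anything like it:

# Invented example: an archival image whose descriptions each carry provenance.
# This is not the NYPL's metadata schema, just an illustration of the problem.

record = {
    "image": "strivers_row_1927.tif",
    "descriptions": [
        {"text": "Some prominent local Negroes",
         "source": "handwritten caption on verso", "date": "1927", "verified": True},
        {"text": "Residents of Strivers' Row, Harlem",
         "source": "cataloguer", "date": "2004", "verified": True},
        {"text": "best block in nyc!!",
         "source": "anonymous web annotation", "date": "2004", "verified": False},
    ],
}

# Which of these should a search index believe, and how would it know?
trusted = [d["text"] for d in record["descriptions"] if d["verified"]]
print(trusted)

Even this toy record has to smuggle in a “verified” flag, which only pushes the question back a level: verified by whom, and on what authority?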
We know from the Web and from various p2p applications that, in the wild, metadata is close to useless because it can be gamed so easily; as a result, no credible search engine relies on it nor has done so for years. Is that really the new Metallica single, or is it five minutes of Lars Ulrich telling you to go fuck yourself? Is that really a captive about to be decapitated by Islamists, or is it a commercial for a crappy movie you never would have clicked on had it represented itself honestly? (Who has the authority to append metadata? Who has the responsibility, or even the technical wherewithal, to verify it?) I’m surprised that someone as savvy as Bruce doesn’t seem to grasp the implications of this for spime.
unspiming
I believe, with Bruce Sterling, that some watershed is fast approaching, past which ordinary objects will be endowed with such information-sensing ( -processing, -storage, -synthesis and -retransmission) power that both the way we understand them and the very language with which we refer to them will need to change.
Where I part ways with him, however, is in my belief that we don’t have to meekly bend over and try to “cope with” the negative consequences of any such development. As Lawrence Lessig rightly reminds us, in the destiny of any designed system, some possibilities are locked in, and others forestalled, at the level of architecture. And fortunately for all of us, when asked to submit to regimes of antihuman banality, some designers have historically had other ideas.
I can do little more than hope that this will always be the case: that those people endowed with the ability to see what’s coming over the horizon not merely describe what reaches their senses, but actively intervene to forestall the worst contingencies arising. Such an undertaking requires care and insight and discretion beyond that which we ordinarily display — myself as much as anyone else — but I firmly believe that we can choose our futures rather than have them imposed on us. In this season of decision, it is clear that in more ways than one, such a moment is now upon us.
If you want a closer look at the “spime metadata” I ginned up to serve as an illustration of this piece, it’s downloadable as a PDF here. It’s intended to represent the self-description (at time of first consumer purchase) of a notional Nike-brand t-shirt.
Join us in Helsinki on May 22nd for a Touchscapes workshop (updated)
Just in case folks here in town are feeling neglected, fear not: we’re doing events here as well.
As part of Helsinki’s World Design Capital 2010 Ideas Forum, and in collaboration with our good friends at Nordkapp, I’m delighted to announce a workshop called “Touchscapes: Toward the next urban ecology.”
Touchscapes is inspired, in large part, by our frustration with the Symbicon/ClearChannel screens currently deployed around Helsinki: how little is being done with them, and how far short of their potential they’ve fallen. Our sense is that we are now surrounded by screens as we move through the city — personal devices, shared interactive surfaces, and now even building-sized displays — and if thinking about how to design for each of these things individually was hard enough, virtually nobody has given much thought to how they function together, as a coherent informational ecosystem.
Until now, that is, because that’s just what we aim to do in the workshop. Join us for a day of activity dedicated to understanding the challenges presented by this swarm of screens, the possibilities they offer for tangible, touch-based interaction, and their implications for the new urban information design. We’ll move back and forth between conceptual thinking and practical doing, developing solid ideas about making the most meaningful use of these emerging resources culturally, commercially, personally and socially.
Attendance is free, but spaces in the workshop are limited, so I recommend you sign up via Nordkapp’s Facebook event page as soon as you possibly can. See you on the 22nd!