
Weighing the pros and cons of driverless cars, in context

Consider the driverless car, as currently envisioned by Google.

So far as I can tell, anyway, most discussion of its prospects, whether breathlessly anticipatory or frankly horrified, is content to weigh it more or less as given. But as I’m always harping on about, I just don’t believe we usefully understand any technology in the abstract, as it sits on a smoothly-paved pad in placid Mountain View. To garner even a first-pass appreciation for the contours of its eventual place in our lives, we have to consider how it would work, and how people would experience it, in a specified actual context. And so here — as just such a first pass, at least — I try to imagine what would happen if autonomous vehicles like those demo’ed by Google were deployed as a service in the place I remain most familiar with, New York City.

The most likely near-term scenario is that such vehicles would be constructed as a fleet of automated taxicabs, not the more radical and frankly more interesting possibility that the service embracing them would be designed to afford truly public transit. The truth of the matter is that the arrival of the technological capability bound up in these vehicles begins to upend these standing categories…but the world can only accommodate so much novelty at once. The vehicle itself is only one component of a distributed actor-network dedicated to the accomplishment of mobility; when the autonomous vehicle begins to supplant the conventional taxi, that whole network has to restabilize around both the vehicle’s own capabilities and the ways in which those capabilities couple with other, existing actors.

In this case, that means actors like the Taxi and Limousine Commission. Enabling legislation, a body of suitable regulation, a controlling legal authority, agreement on procedures for assessing liability and calibrating the provision of insurance: all of these things will need to be decided upon before any such thing as the automation of surface traffic in New York City can happen. And these provisions have a conservative effect. For some transitional period of arbitrary length, anyway, they’ll tend to drag this theoretically disruptive actor back toward the categories we’re familiar with, the modes in which we’re used to the world working. That period may last months or it may last decades; there’s just no way of knowing ahead of time. But during this interregnum, we’ll approach the new thing through interfaces, metaphors and other linkages we’re already used to.

Automated taxis, as envisioned by designer Petr Kubik.

So. What can we reasonably assert of a driverless car on the Google model, when such a thing is deployed on the streets and known to its riders as a taxi?

On the plus side of the ledger:
– Black men would finally be able to hail a cab in New York City;
– So would people who use wheelchairs, folks carrying bulky packages, and others habitually and summarily bypassed by drivers;
– Sexual harassment of women riding alone would instantly cease to be an issue;
– You’d never have a driver slow as if to pick you up, roll down the window to inquire as to your destination, and only then decide it wasn’t somewhere they felt like taking you. (Yes, this is against the law, but any New Yorker will tell you it happens every damn day of the week);
– Similarly, if you happen to need a cab at 4:30, you’ll be able to catch one — getting stuck in the trenches of shift change would be a thing of the past;
– The eerily smooth ride of continuous algorithmic control will replace the lurching stop-and-go style endemic to the last few generations of NYC drivers, with everything that implies for both fuel efficiency and your ability to keep your lunch down.

These are all very good things, and they’d all be true no matter how banjaxed the service-design implementation turns out to be. (As, let’s face it, it would be: remember that we’re talking about Google here.) But as I’m fond of pointing out, none of these very good things can be had without cost. What does the flipside of the equation look like?

– Most obviously, a full-fleet replacement would immediately zero out some 50,000 jobs — mostly jobs held by immigrants, in an economy with few other decent prospects for their employment. Let’s be clear that these, while not great jobs (shitty hours, no benefits, physical discomfort, occasionally abusive customers), generate a net revenue that averages somewhere around $23/hour, and this at a time when the New York State minimum wage stands at $8/hour. These are jobs that tie families and entire communities together;
– The wholesale replacement of these drivers would eliminate one of the very few remaining contexts in which wealthy New Yorkers encounter recent immigrants and their culture at all;
– Though this is admittedly less of an issue in Manhattan, it does eliminate at least some opportunity for drivers to develop and demonstrate mastery and urban savoir faire;
– It would give Google, an advertising broker, unparalleled insight into the comings and goings of a relatively wealthy cohort of riders, and in general a dataset of enormous and irreplicable value;
– Finally, by displacing alternatives, and over the long term undermining the ecosystem of technical capabilities, human competences and other provisions that undergirds contemporary taxi service, the autonomous taxi would in time tend to bring into being and stabilize the conditions for its own perpetuation, to the exclusion of other ways of doing things that might ultimately be more productive. Of course, you could say precisely the same thing about contemporary taxis — that’s kind of the point I’m trying to make. But we should see these dynamics with clear eyes before jumping in, no?

I’m sure, quite sure, that there are weighting factors I’ve overlooked, perhaps even obvious and significant ones. This isn’t the whole story, or anything like it. There is one broadly observable trend I can’t help noticing, however, in all the above: the benefits we stand to derive from deploying autonomous vehicles on our streets in this way are all felt in the near or even immediate term, while the costs all tend to be circumstances that only tell in the fullness of time. And we haven’t as a species historically tended to do very well with this pattern, the prime example being our experience of the automobile itself. It’s something to keep in mind.

There’s also something to be gleaned from Google’s decision to throw in their lot with Uber — an organization explicitly oriented toward the demands of the wealthy and boundlessly, even gleefully, corrosive of the public trust. And that is that you shouldn’t set your hopes on any mobility service Google builds on their autonomous-vehicle technology ever being positioned as the public accommodation or public utility it certainly could be. The decision to more tightly integrate Uber into their suite of wayfinding and journey-planning services makes it clear that for Google, the prerogative to maximize return on investment for a very few will always outweigh the interests of the communities in which they operate. And that, too, is something to keep in mind, anytime you hear someone touting all of the ways in which the clean, effortless autotaxi stands to resculpt the city.

Beacons, marketing and the neoliberal logic of space, or: The Engelbart overshoot

If you’ve been reading this blog for any particular length of time, or have tripped across my writing on the Urbanscale site or elsewhere, you’ve probably noticed that I generally insist on discussing the ostensible benefits of urban technology at an unusually granular level. (In fact, I did this just yesterday, in my responses to questions put to me by Korea’s architectural magazine SPACE.) I’ll want to talk about specific locales, devices, instances and deployments, that is, rather than immediately hopping on board with the wide-eyed enthusiasm for generic technical “innovation” in cities that seems near-universal at our moment in history.

My point in doing so is that we can’t really fairly assess a value proposition, or understand the precise nature of the trade-offs bound up in a given deployment of technology, until we see what people make of it in the wild, in a specific locale. The canonical example of the perils that attend the overly generic consideration of a technology is bus rapid transit, or BRT, which works very, very well indeed on sociophysical terrain that strongly resembles its original home of Curitiba, and much less so in low-density environments like Johannesburg, or in places where, for whatever reason, access to the right-of-way can’t be controlled, notably Delhi and New York City. BRT was sold to these latter municipalities as a panacea for problems of urban mobility, without reference to all of the spatial, social, regulatory, pricing-model and service-design elements that had to be brought into balance before anything like success could be declared, and it shows. (Boy howdy, does it show. Have you ridden the New York City MTA’s half-assed instantiation of BRT lately?)

And if anything, information technology is even more sensitively dependent on factors like these. The choice of one technology over another (touchscreen form factor, operating system, service provider, register of language…) very often turns out to determine the success or failure of a given proposition.

But despite all this, sometimes it is possible for the careful observer to suss out the likely future contours of a technology’s adoption, based on a more general appreciation of its nature. And that’s why I want to take a little time today to discuss with you my thinking around the emergent class of low-power, low-range transmitters known as “beacons.”

Classically, of course, a “beacon” was a visually prominent signal of some sort, designed to notify or warn those encountering it of some otherwise indistinct condition or feature in the landscape. And perhaps as originally envisioned, this class of transmitters genuinely was supposed to be what it said on the tin: a simple way for relatively low-powered devices to find and lock onto one another, amid the fog and unpredictable dynamism of the everyday.

This is not a particularly new idea; as long ago as 2005, I’d proposed on my old v-2 site that networked objects would need some lightweight, low-cost way of radiating information about their presence and capabilities to other things (and by extension, people) in the near neighborhood — the foundation of what, at that time, I thought of as a “universal service-discovery layer” draped over the world. And of course I was nowhere near the first to have proposed something along these lines; I myself had been inspired to think more deeply about things talking to each other from a sideways reading of a throw-away bit of cleverness in Bruce Sterling’s 1998 novel Distraction, and it’s fair to say that the idea of things automatically broadcasting their identity to other things had been in the air for quite a few years before that.

But in evolving commercial parlance, beacons are nothing of the sort, really. A contemporary beacon (like these ugly and rather hostile-looking blebs, sold by Estimote) is primarily designed to capture information, not to convey it — and such information as it does convey outward is disproportionately intended to benefit the sender over the recipient. So my first objection to beacon technology is that this very framing is in itself mendacious, dishonest and misleading. (You know you’re in trouble when the very name of something is a lie.)
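
For concreteness, it is worth looking at what one of these devices actually radiates. The sketch below parses the widely used iBeacon advertisement payload; the layout described in the comments is the published iBeacon format, but the sample bytes are invented, and nothing here is specific to any one vendor’s hardware. The notable thing is how little the transmission carries: an opaque identifier and two small integers, meaningless to the passerby and useful only as a key into the deployer’s own database.

```python
# A minimal sketch, assuming the standard iBeacon advertisement layout: after
# the manufacturer ID, a type byte (0x02), a length byte (0x15), a 16-byte
# proximity UUID, big-endian 16-bit "major" and "minor" fields, and a signed
# byte giving calibrated signal strength at one metre. Sample bytes invented.
import struct
import uuid

def parse_ibeacon(payload: bytes):
    """Return (uuid, major, minor, tx_power_dbm), or None if not an iBeacon frame."""
    if len(payload) != 23 or payload[0] != 0x02 or payload[1] != 0x15:
        return None
    proximity_uuid = uuid.UUID(bytes=payload[2:18])
    major, minor = struct.unpack(">HH", payload[18:22])
    (tx_power,) = struct.unpack("b", payload[22:23])
    return proximity_uuid, major, minor, tx_power

sample = bytes.fromhex(
    "0215"                              # iBeacon type and length
    "00112233445566778899aabbccddeeff"  # proximity UUID (illustrative)
    "0001"                              # "major": conventionally a venue or chain of stores
    "000a"                              # "minor": conventionally one beacon within the venue
    "c5"                                # calibrated power at 1 m: -59 dBm
)
print(parse_ibeacon(sample))
```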

As things stand now, beacons are intended for one purpose, and one purpose alone: to capture and monetize your behavior. As with the so-called Internet of Things more broadly, there simply aren’t any particularly convincing or compelling use cases for the technology that aren’t about driving needless consumption; almost without exception, those that are even partially robust have to do with closing a commercial transaction. Both the language of beacon technology and the framework of assumptions it grows out of are airlessly, claustrophobically hegemonic, and this thinking is all over their sites: vendors urge you to deploy these “media-rich banner ads for the physical world” in “any physical place, such as your retail store,” to “drive engagement,” “cross-sell and up-sell” and eventually “convert” passersby to purchasers. Even beacon advocates have a hard time coming up with any more than half-hearted art projects by way of uses for the technology that are not founded in the desire to relieve some passing mark of the contents of their wallet, reliably, predictably and on an ongoing basis.

And even those scenarios of use which appear at first blush to be founded in blamelessly humanitarian ends, when subjected to trial by ordeal ultimately turn out to embrace the shabbiest neoliberal reasoning. Cheaper to spackle a subway station with networked microlocation transponders, goes the thinking, than to actually hire and train the (unpredictable, and damnably needy) human beings that might help riders navigate the corridors and interchange nodes. Even if the devices don’t actually turn out to work all that reliably in the fullness of time, or impose a starkly higher TCO than initially estimated, there will be a concrete deployment that someone can point to as an accomplishment, a ticked-off achievement and a justification for renewed budgetary allocation or re-election.

Finally, I find it noteworthy that the beacon cost-benefit proposition can only subsist when deployment is accomplished stealthily; when it is presented to citizens forthrightly and transparently, it is just as forthrightly rejected. Perhaps it’s a temporary blip of post-Snowden reticence, but my sense is that most of us have become chary of bundling too many performative dimensions of our identity onto our converged devices at once, and not at all without reason. (Ultimately, I diagnose similar reasons underneath the failure to date of digital wallets and similar device-based payment solutions to gain any market traction whatsoever, though there are other questions at play there as well.)

Beyond and back

The interest in beacons strikes me as being symptomatic of something deeper and more troubling in the culture of technology, something I think of as “the Engelbart overshoot.”

There was a powerful dream that sustained (and not incidentally, justified) half a century’s inquiry into the possibilities of information technology, from Vannevar Bush to Doug Engelbart straight through to Mark Weiser. This was the dream of augmenting the individual human being with instantaneous access to all knowledge, from wherever in the world he or she happened to be standing at any given moment. As toweringly, preposterously ambitious as that goal seems when stated so baldly, it’s hard to conclude anything but that we actually did achieve that dream some time ago, at least as a robust technical proof of concept.

We achieved that dream, and immediately set about betraying it. We betrayed it by shrouding the knowledge it was founded on in bullshit IP law, and by insisting that every interaction with it be pushed through some set of mostly invidious business logic. We betrayed it by building our otherwise astoundingly liberatory propositions around walled gardens and proprietary standards, by putting the prerogatives of rent-seeking ahead of any move to fertilize and renew the commons, and by tolerating the infestation of our informational ecology with vile, value-destroying parasites. These days technical innovators seem more likely to be lauded for devising new ways to harness and exploit people’s life energy for private gain than for the inverse.

In fact, you and I now draw breath in a post-utopian world — a world where the tide of technical idealism has long receded from its high-water mark, where it’s a matter of course to suggest that we must attach (someone’s) networked sensors to our bodies in order to know them, and where, rather astonishingly, it is possible for an intelligent person to argue that spamming the globe with such devices is somehow a precondition of “reclaim[ing our] environment as a place of sociability and creativity.” And this is the world in which beacons and the cause of advocacy for them arise.

There’s very little meaningful for this technology to do — no specifiable aim or goal that genuinely seems to require its deployment, which could not be achieved as or more readily in some other way. As presently constituted, anyway, it doesn’t serve the great dream of aiding us in our lifelong effort to make sense of the endlessly confounding and occasionally dangerous world. It furthers only the puniest and most shaming of ambitions. To the talented, technically capable folks working so hard to build out the beacon world, I ask: Is this really what you want to spend any part of your only life on Earth working to develop? To those advocating this turn, I ask: Can’t you think of any way of relating to people more interesting and productive than trying to sell them something they neither want nor need, and most likely cannot genuinely afford?

It doesn’t take too concerted an intellectual effort to understand what’s really going on with beacons — as a matter of fact, as we’ve seen, most people evidently seem to understand the situation perfectly well already. But I don’t hold out too much hope of getting any of the truly convinced to see the light on this question; we all know how very difficult it can be to get people to understand something when their salary (mortgage payments/kids’ private-school tuition/equity stake/deal flow) depends on them not understanding it. If you ask me, though, we were meant for better things than this.

“Against the smart city” teaser

The following is section 4 of “Against the smart city,” the first part of The City Is Here For You To Use. Our Do projects will be publishing “Against the smart city” in stand-alone POD pamphlet and Kindle editions later on this month.

UPDATE: The Kindle edition is now available for purchase.

4 | The smart city pretends to an objectivity, a unity and a perfect knowledge that are nowhere achievable, even in principle.

Of the major technology vendors working in the field, Siemens makes the strongest and most explicit statement[1] of the philosophical underpinnings on which their (and indeed the entire) smart-city enterprise is founded: “Several decades from now cities will have countless autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits and energy consumption, and provide optimum service…The goal of such a city is to optimally regulate and control resources by means of autonomous IT systems.”

We’ve already considered what kind of ideological work is being done when efforts like these are positioned as taking place in some proximate future. The claim of perfect competence Siemens makes for its autonomous IT systems, though, is by far the more important part of the passage. It reflects a clear philosophical position, and while this position is more forthrightly articulated here than it is anywhere else in the smart-city literature, it is without question latent in the work of IBM, Cisco and their peers. Given its foundational importance to the smart-city value proposition, I believe it’s worth unpacking in some detail.

What we encounter in this statement is an unreconstructed logical positivism, which, among other things, implicitly holds that the world is in principle perfectly knowable, its contents enumerable, and their relations capable of being meaningfully encoded in the state of a technical system, without bias or distortion. As applied to the affairs of cities, it is effectively an argument that there is one and only one universal and transcendently correct solution to each identified individual or collective human need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something which can be encoded in public policy, again without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)

Every single aspect of this argument is problematic.

Perfectly knowable, without bias or distortion: Collectively, we’ve known since Heisenberg that to observe the behavior of a system is to intervene in it. Even in principle, there is no way to stand outside a system and take a snapshot of it as it existed at time T.

But it’s not as if any of us enjoy the luxury of living in principle. We act in historical space and time, as do the technological systems we devise and enlist as our surrogates and extensions. So when Siemens talks about a city’s autonomous systems acting on “perfect knowledge” of residents’ habits and behaviors, what they are suggesting in the first place is that everything those residents ever do — whether in public, or in spaces and settings formerly thought of as private — can be sensed accurately, raised to the network without loss, and submitted to the consideration of some system capable of interpreting it appropriately. And furthermore, that all of these efforts can somehow, by means unspecified, avoid being skewed by the entropy, error and contingency that mark everything else that transpires inside history.

Some skepticism regarding this scenario would certainly be understandable. It’s hard to see how Siemens, or anybody else, could avoid the slippage that’s bound to occur at every step of this process, even under the most favorable circumstances imaginable.

However thoroughly Siemens may deploy their sensors, to start with, they’ll only ever capture those qualities of the world that are amenable to capture, and measure only those quantities that can be measured. Let’s stipulate, for the moment, that these sensing mechanisms somehow operate flawlessly, and in perpetuity. What if information crucial to the formulation of sound civic policy is somehow absent from their soundings, resides in the space between them, or is derived from the interaction between whatever quality of the world we set out to measure and our corporeal experience of it?

Other distortions may creep into the quantification of urban processes. Actors whose performance is subject to measurement may consciously adapt their behavior to produce metrics favorable to them in one way or another. For example, a police officer under pressure to “make quota” may issue citations for infractions she would ordinarily overlook; conversely, her precinct commander, squeezed by City Hall to present the city as an ever-safer haven for investment, may downwardly classify[2] felony assault as a simple misdemeanor. This is the phenomenon known to viewers of The Wire as “juking the stats[3],” and it’s particularly likely to happen when financial or other incentives are contingent on achieving some nominal performance threshold. Nor is it the only factor likely to skew the act of data collection; long, sad experience suggests that the usual array of all-too-human pressures will continue to condition any such effort. (Consider the recent case in which Seoul Metro operators were charged with using CCTV cameras to surreptitiously ogle women passengers[4], rather than scan platforms and cars for criminal activity as intended.)

What about those human behaviors, and they are many, that we may for whatever reason wish to hide, dissemble, disguise, or otherwise prevent being disclosed to the surveillant systems all around us? “Perfect knowledge,” by definition, implies either that no such attempts at obfuscation will be made, or that any and all such attempts will remain fruitless. Neither one of these circumstances sounds very much like any city I’m familiar with, or, for that matter, would want to be.

And what about the question of interpretation? The Siemens scenario amounts to a bizarre compound assertion that each of our acts has a single salient meaning, which is always and invariably straightforwardly self-evident — in fact, so much so that this meaning can be recognized, made sense of and acted upon remotely, by a machinic system, without any possibility of mistaken appraisal.

The most prominent advocates of this approach appear to believe that the contingency of data capture is not an issue, nor is any particular act of interpretation involved in making use of whatever data is retrieved from the world in this way. When discussing their own smart-city venture, senior IBM executives[5] argue, in so many words, that “the data is the data”: transcendent, limpid and uncompromised by human frailty. This mystification of “the data” goes unremarked upon and unchallenged not merely in IBM’s material, but in the overwhelming majority of discussions of the smart city. But different values for air pollution in a given location can be produced by varying the height at which a sensor is mounted by a few meters. Perceptions of risk in a neighborhood can be transformed by altering the taxonomy used to classify reported crimes ever so slightly[6]. And anyone who’s ever worked in opinion polling knows how sensitive the results are to the precise wording of a survey. The fact is that the data is never “just” the data, and to assert otherwise is to lend inherently political and interested decisions regarding the act of data collection an unwonted gloss of neutrality and dispassionate scientific objectivity.

The bold claim of perfect knowledge appears incompatible with the messy reality of all known information-processing systems, the human individuals and institutions that make use of them and, more broadly, with the world as we experience it. In fact, it’s astonishing that anyone would ever be so unwary as to claim perfection on behalf of any computational system, no matter how powerful.

One and only one solution: Given their inherent, definitional diversity, layeredness and complexity, we can usefully think of cities as tragic. As individuals and communities, the people who live in them hold to multiple competing and equally valid conceptions of the good, and it’s impossible to fully satisfy all of them at the same time. A wavefront of gentrification can open up exciting new opportunities for young homesteaders, small retailers and craft producers, but tends to displace the very people who’d given a neighborhood its character and identity. An increased police presence on the streets of a district reassures some residents, but makes others uneasy, and puts yet others at definable risk. Even something as seemingly straightforward and honorable as an anticorruption initiative can undo a fabric of relations that offered the otherwise voiceless at least some access to local power. We should know by now that there are and can be no[7] Pareto-optimal solutions for any system as complex as a city.

Arrived at algorithmically: Assume, for the sake of argument, that there could be such a solution, a master formula capable of resolving all resource-allocation conflicts and balancing the needs of all a city’s competing constituencies. It certainly would be convenient if this golden mean could be determined automatically and consistently, via the application of a set procedure — in a word, algorithmically.

In urban planning, the idea that certain kinds of challenges are susceptible to algorithmic resolution has a long pedigree. It’s already present in the Corbusian doctrine that the ideal and correct ratio of spatial provisioning in a city can be calculated from nothing more than an enumeration of the population, it underpins the complex composite indices of Jay Forrester’s 1969 Urban Dynamics[8], and it lay at the heart of the RAND Corporation’s (eventually disastrous) intervention in the management of 1970s New York City[9]. No doubt part of the idea’s appeal to smart-city advocates, too, is the familial resemblance such an algorithm would bear to the formulae by which commercial real-estate developers calculate air rights, the land area that must be reserved for parking in a community of a given size, and so on.

In the right context, at the appropriate scale, such tools are surely useful. But the wholesale surrender of municipal management to an algorithmic toolset — for that is surely what is implied by the word “autonomous” — would seem to repose an undue amount of trust in the party responsible for authoring the algorithm. At least, if the formulae at the heart of the Siemens scenario turn out to be anything at all like the ones used in the current generation of computational models, critical, life-altering decisions will hinge on the interaction of poorly-defined and surprisingly subjective values: a “quality of life” metric, a vague category of “supercreative[10]” occupations, or other idiosyncrasies along these lines. The output generated by such a procedure may turn on half-clever abstractions, in which a complex circumstance resistant to direct measurement is represented by the manipulation of some more easily-determined proxy value: average walking speed stands in for the more inchoate “pace” of urban life, while the number of patent applications constitutes an index of “innovation.”
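
To make that arbitrariness concrete, consider a toy composite index; everything in the sketch below (district names, indicator values, weights) is invented for illustration. Identical data ranks a different district first depending on nothing more than the weights the author of the algorithm happens to prefer.

```python
# A toy illustration of the point above: the same two districts, scored on the
# same two proxy indicators, trade places the moment the analyst adjusts the
# weights. District names, indicators and weights are all invented.
districts = {
    "District A": {"avg_walking_speed": 1.6, "patents_per_1000": 0.4},
    "District B": {"avg_walking_speed": 1.0, "patents_per_1000": 2.1},
}

def composite_score(indicators, weights):
    """Weighted sum of proxies: the kind of index a model might allocate resources by."""
    return sum(weights[name] * value for name, value in indicators.items())

for weights in ({"avg_walking_speed": 0.8, "patents_per_1000": 0.2},
                {"avg_walking_speed": 0.2, "patents_per_1000": 0.8}):
    ranked = sorted(districts, key=lambda d: composite_score(districts[d], weights), reverse=True)
    print(weights, "->", ranked[0], "ranks first")
```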

Even beyond whatever doubts we may harbor as to the ability of algorithms constructed in this way to capture urban dynamics with any sensitivity, the element of the arbitrary we see here should give us pause. Given the significant scope for discretion in defining the variables on which any such thing is founded, we need to understand that the authorship of an algorithm intended to guide the distribution of civic resources is itself an inherently political act. And at least as things stand today, neither in the Siemens material nor anywhere else in the smart-city literature is there any suggestion that either algorithms or their designers would be subject to the ordinary processes of democratic accountability.

Encoded in public policy, and applied transparently, dispassionately and in a manner free from politics: A review of the relevant history suggests that policy recommendations derived from computational models are only rarely applied to questions as politically sensitive as resource allocation without some intermediate tuning taking place. Inconvenient results may be suppressed, arbitrarily overridden by more heavily-weighted decision factors, or simply ignored.

The best-documented example of this tendency remains the work of the New York City-RAND Institute, explicitly chartered to implant in the governance of New York City “the kind of streamlined, modern management that Robert McNamara applied in the Pentagon with such success[11]” during his tenure as Secretary of Defense (1961-1968). The statistics-driven approach that McNamara’s Whiz Kids had so famously brought to the prosecution of the war in Vietnam, variously thought of as “systems analysis” or “operations research,” was first applied to New York in a series of studies conducted between 1973 and 1975, in which RAND used FDNY incident response-time data[12] to determine the optimal distribution of fire stations.

Methodological flaws undermined the effort from the outset. RAND, for simplicity’s sake, chose to use the time a company arrived at the scene of a fire as the basis of their model, rather than the time at which that company actually began fighting the fire; somewhat unbelievably, for anyone with the slightest familiarity with New York City, RAND’s analysts then compounded their error by refusing to acknowledge traffic as a factor in response time[13]. Again, we see some easily-measured value used as a proxy for a reality that is harder to quantify, and again we see the distortion of ostensibly neutral results by the choices made by an algorithm’s designers. But the more enduring lesson for proponents of data-driven policy has to do with how the study’s results were applied. Despite the mantle of coolly “objective” scientism that systems analysis preferred to wrap itself in, RAND’s final recommendations bowed to factionalism within the Fire Department, as well as the departmental leadership’s need to placate critical external constituencies; the exercise, in other words, turned out to be nothing if not political.

The consequences of RAND’s intervention were catastrophic. Following their recommendations, fire battalions in some of the most vulnerable sections of the city were decommissioned, while the department opened other stations in low-density, low-threat areas; the resulting spatial distribution of firefighting assets actually prevented resources from being applied where they were most critically needed. Great swaths of the city’s poorest neighborhoods burned to the ground as a direct result — most memorably the South Bronx, but immense tracts of Manhattan and Brooklyn as well. Hundreds of thousands of residents were displaced, many permanently, and the unforgettable images that emerged fueled perceptions of the city’s nigh-apocalyptic unmanageability that impeded its prospects well into the 1980s. Might a less-biased model, or a less politically-skewed application of the extant findings, have produced a more favorable outcome? This obviously remains unknowable…but the human and economic calamity that actually did transpire is a matter of public record.

Examples like this counsel us to be wary of suggestions that any autonomous system should ever be entrusted with the regulation and control of civic resources — just as we ought to be wary of claims that the application of some single master algorithm could result in a Pareto-efficient distribution of resources, or that the complex urban ecology might be sufficiently characterized in data to permit the effective operation of such an algorithm in the first place. For all of the conceptual flaws we’ve identified in the Siemens proposition, though, it’s the word “goal” that just leaps off the page. In all my thinking about cities, it has frankly never occurred to me to assert that cities have goals. (What is Cleveland’s goal? Karachi’s?) What is being suggested here strikes me as a rather profound misunderstanding of what a city is. Hierarchical organizations can be said to have goals, certainly, but not anything as heterogeneous in composition as a city, and most especially not a city in anything resembling a democratic society.

By failing to account for the situation of technological devices inside historical space and time, the diversity and complexity of the urban ecology, the reality of politics or, most puzzlingly of all, the “normal accidents[14]” all complex systems are subject to, Siemens’ vision of cities perfectly regulated by autonomous smart systems thoroughly disqualifies itself. But it’s in this depiction of a city as an entity with unitary goals that it comes closest to self-parody.

If it seems like breaking a butterfly on a wheel to subject marketing copy to this kind of dissection, consider that I am merely taking Siemens and the other advocates of the smart city at their word: this is what they (claim to) really believe. When pushed on the question, of course, some individuals working for enterprises at the heart of the smart-city discourse admit that what their employers actually propose to do is distinctly more modest: they simply mean to deploy sensors on municipal infrastructure, and adjust lighting levels, headway or flow rates to accommodate real-time need. If this is the case, perhaps they ought to have a word with their copywriters, who do the endeavor no favors by indulging in the imperial overreach of their rhetoric. As matters now stand, the claim of perfect competence that is implicit in most smart-city promotional language — and thoroughly explicit in the Siemens material — is incommensurate with everything we know about the way technical systems work, as well as the world they work in. The municipal governments that constitute the primary intended audience for materials like these can only be advised, therefore, to approach all such claims with the greatest caution.


Notes

[1] Siemens Corporation. “Sustainable Buildings — Networked Technologies: Smart Homes and Cities,” Pictures of the Future, Fall 2008.
foryoutou.se/siemenstotal

[2] For example, in New York City, an anonymous survey of “hundreds of retired high-ranking [NYPD] officials” found that “tremendous pressure to reduce crime, year after year, prompted some supervisors and precinct commanders to distort crime statistics” they submitted to the centralized COMPSTAT system. Chen, David W., “Survey Raises Questions on Data-Driven Policy,” The New York Times, 08 February 2010.
foryoutou.se/jukingthenypd

[3] Simon, David, Kia Corthron, Ed Burns and Chris Collins, The Wire, Season 4, Episode 9: “Know Your Place,” first aired 12 November 2006.

[4] Asian Business Daily. “Subway CCTV was used to watch citizens’ bare skin sneakily,” 16 July 2013. (In Korean.)
foryoutou.se/seoulcctv

[5] Fletcher, Jim, IBM Distinguished Engineer, and Guruduth Banavar, Vice President and Chief Technology Officer for Global Public Sector, personal communication, 08 June 2011.

[6] Migurski, Michal. “Visualizing Urban Data,” in Segaran, Toby and Jeff Hammerbacher, Beautiful Data: The Stories Behind Elegant Data Solutions, O’Reilly Media, Sebastopol CA, 2012: pp. 167-182. See also Migurski, Michal. “Oakland Crime Maps X,” tecznotes, 03 March 2008.
foryoutou.se/oaklandcrime

[7] See, as well, Sen’s dissection of the inherent conflict between even mildly liberal values and Pareto optimality. Sen, Amartya Kumar. “The impossibility of a Paretian liberal.” Journal of Political Economy Volume 78 Number 1, Jan-Feb 1970.
foryoutou.se/nopareto

[8] Forrester, Jay. Urban Dynamics, The MIT Press, Cambridge, MA, 1969.

[9] See Flood, Joe. The Fires: How a Computer Formula Burned Down New York City — And Determined The Future Of American Cities, Riverhead Books, New York, 2010.

[10] See, e.g. Bettencourt, Luís M.A. et al. “Growth, innovation, scaling, and the pace of life in cities,” Proceedings of the National Academy of Sciences, Volume 104 Number 17, 24 April 2007, pp. 7301-7306.
foryoutou.se/superlinear

[11] Flood, ibid., Chapter Six.

[12] Rider, Kenneth L. “A Parametric Model for the Allocation of Fire Companies,” New York City-RAND Institute report R-1615-NYC/HUD, April 1975; Kolesar, Peter. “A Model for Predicting Average Fire Company Travel Times,” New York City-RAND Institute report R-1624-NYC, June 1975.
foryoutou.se/randfirecos
foryoutou.se/randfiretimes

[13] See the Amazon interview with Fires author Joe Flood.
foryoutou.se/randfires

[14] Perrow, Charles. Normal Accidents: Living with High-Risk Technologies, Basic Books, New York, 1984.

Stealthy, slippery, crusty, prickly and jittery redux: On design interventions intended to make space inhospitable

From Mitchell Duneier’s Sidewalk, 1999. The context is a discussion of various physical interventions that have been made in the fabric of New York City’s Pennsylvania Station:

On a walk through the station with [director of "homeless outreach" Richard] Rubel and the photographer Ovie Carter one summer day in 1997…I found it essentially bare of unhoused people. I told Rubel of my interest in the station as a place that had once sustained the lives of unhoused people, and asked if he could point out changes that had been made so that it would be less inviting as a habitat where subsistence elements could be found in one place. He pointed out a variety of design elements of the station which had been transformed, helping to illustrate aspects of the physical structure that had formerly enabled it to serve as a habitat.

He took us to a closet near the Seventh Avenue entrance. “We routinely had panhandlers gathering here, and you could see this closet area where that heavy bracket is, that was a niche.”

“What do you mean by ‘a niche’?”

“This spot right over here was where a panhandler would stand. So my philosophy is, you don’t create nooks and corners. You draw people out into the open, so that your police officers and your cameras have a clean line of sight [emphasis added], so people can’t hide either to sleep or to panhandle.”

Next he brought us to a retail operation with a square corner. “Someone here can sleep and be protected by this line of sight. A space like this serves nobody’s purpose [emphasis added]. So if their gate closes, and somebody sleeps on the floor over here, they are lying undetected. So what you try to do is have people construct their building lines straight out, so you have a straight line of sight with no areas that people can hide behind.”

Next he brought us to what he called a “dead area.” “I find this staircase provides limited use to the station. Amtrak does not physically own this lobby area. We own the staircase and the ledge here. One of the problems that we have in the station is a multi-agency situation where people know what the fringe areas are, the gray areas, that are less than policed. So they serve as focal points for the homeless population. We used to see people sleeping on this brick ledge every night. I told them I wanted a barrier that would prevent people from sleeping on both sides of this ledge. This is an example of turning something around to get the desired effect.”

“Another situation we had was around the fringes of the taxi roadway. We had these niches that were open. The Madison Square Garden customers that come down from the games would look down and see a community of people living there, as well as refuse that they leave behind.” He installed a fencing project to keep the homeless from going behind corners, drawing them out into the open [emphasis added]. “And again,” said Rubel, “the problem has gone away.”

This logic, of course, is immanent in the design of a great deal of contemporary public urban space, but you rarely find it expressed quite as explicitly as it is here. Compare, as well, Jacobs (1961) on the importance to vibrant street life (and particularly to children’s opportunities for play) of an irregular building line at the sidewalk edge.

On augmenting reality

The following is the draft of a section from my forthcoming book, The City Is Here For You To Use, concerning various ways in which networked devices are used to furnish the mobile pedestrian with a layer of location-specific information superimposed onto the forward view — “augmented reality,” in other words. (The context is an extended discussion of four modes in which information is returned from the global network to the world so it may be engaged, considered and acted upon, which is why the bit here starts in medias res.)

As you see it here, the section is not quite in its final form; it hasn’t yet been edited for meter, euphony or flow, and in particular, some of the arguments toward the end remain too telescoped to really stand up to much inspection. Nevertheless, given the speed at which wearable AR is evolving, I thought it would be better to get this out now as-is, to garner your comments and be strengthened by them. I hope you enjoy it.

1

One seemingly potent way of returning networked information to the world would be if we could layer it directly over that which we perceive. This is the premise of so-called augmented reality, or AR, which proposes to furnish users with some order of knowledge about the world and the objects in it, via an overlay of informational graphics superimposed on the visual field. In principle, this augmentation is agnostic as to the mediating artifact involved, which could be the screen of a phone or tablet, a vehicle’s windshield, or, as Google’s Glass suggests, a lightweight, face-mounted reticle.

AR has its conceptual roots in informational displays developed for military pilots in the early 1960s, at the point when the performance of enemy fighter aircraft began to overwhelm a human pilot’s ability to react. In the fraught regime of jet-age dogfighting, even a momentary dip of the eyes to a dashboard-mounted instrument cluster could mean disaster. The solution was to project information about altitude, airspeed and the status of weapons and other critical aircraft systems onto a transparent pane aligned with the field of vision, a “head-up display.”

This notion turned out to have applicability in fields beyond aerial combat, where the issue wasn’t so much reaction time as it was visual complexity. One early AR system was intended to help engineers make sense of the gutty tangle of hydraulic lines, wiring and control mechanisms in the fuselage of an airliner under construction; each component in the otherwise-hopeless confusion was overlaid with a visual tag identifying it by name, and colored according to the system it belonged to.

Other systems were designed to help people manage situations in which both time and the complexity of the environment were sources of pressure — for example, to aid first responders in dispelling the fog and chaos they’re confronted with upon arrival at the scene of an emergency. One prototype furnished firefighters with visors onto which structural diagrams of a burning building were projected, along with symbols indicating egress routes, the position of other emergency personnel, and the presence of electric wiring or other potentially dangerous infrastructural elements.

The necessity of integrating what were then relatively crude and heavy cameras, motion sensors and projectors into a comfortably wearable package limited the success of these early efforts — and this is to say nothing of the challenges posed by the difficulty of establishing a reliable network connection to a mobile unit. But the conceptual heavy lifting done to support these initial forays produced a readymade discourse, waiting for the day augmentation might be reinstantiated in smaller, lighter, more capable hardware.

That is a point we appear to have arrived at with the advent of the smartphone. As we’ve seen, the smartphone handset can be thought of as a lamination together of several different sensing and presentation technologies, subsets of which can be recombined with one another to produce distinctly different ways of engaging networked information. Bundle a camera, accelerometer/gyroscope, and display screen in a single networked handset, and what you have in your hands is indeed an artifact capable of sustaining rudimentary augmentation. Add GPS functionality and a three-dimensional model of the world — either maintained onboard the device, or resident in the cloud — and a viewer can be offered location-specific information, registered with and mapped onto the surrounding urban fabric.
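
By way of illustration, the registration step amounts to a modest piece of arithmetic. The sketch below is a minimal, illustrative version, assuming an equirectangular approximation over city-scale distances and a simple linear mapping into an assumed camera field of view; it describes the principle rather than the method of any particular AR toolkit, and the coordinates in the example are arbitrary.

```python
# A minimal sketch of geo-registration for a phone-based AR view, assuming an
# equirectangular approximation over short distances and a simple linear
# mapping into an assumed 62-degree horizontal camera field of view.
from math import radians, degrees, cos, atan2

def bearing_to(dev_lat, dev_lon, poi_lat, poi_lon):
    """Approximate compass bearing (degrees from north) from the device to a point of interest."""
    d_north = radians(poi_lat - dev_lat)
    d_east = radians(poi_lon - dev_lon) * cos(radians(dev_lat))
    return (degrees(atan2(d_east, d_north)) + 360.0) % 360.0

def overlay_x(dev_lat, dev_lon, heading_deg, poi_lat, poi_lon,
              screen_width_px=1080, h_fov_deg=62.0):
    """Horizontal pixel at which to draw the POI's label, or None if it is off-screen."""
    offset = (bearing_to(dev_lat, dev_lon, poi_lat, poi_lon) - heading_deg + 540.0) % 360.0 - 180.0
    if abs(offset) > h_fov_deg / 2:
        return None  # outside the camera's field of view
    return (offset / h_fov_deg + 0.5) * screen_width_px

# Facing due north, a point of interest lying to the north-north-east should
# land to the right of centre on the screen (about pixel 870 of 1080 here).
print(overlay_x(40.7420, -73.9756, 0.0, 40.7500, -73.9720))
```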

In essence, phone-based AR treats the handset like the transparent pane of a cockpit head-up display: you hold it before you, its camera captures the forward-facing view, and this is rendered on the screen transparently but for whatever overlay of information is applied. Turn and the on-screen view turns with you, tracked (after a momentary stutter) by the grid of overlaid graphics. And those graphics can provide anything the network can: identification, annotation, direction or commentary.

It’s not hard to see why developers and enthusiasts might jump at this potential, even given the sharp limits imposed by the phone as platform. We move through the world and we act in it, but the knowledge we base our movements and actions on is always starkly less than what it might be. And we pay the price for this daily, in increments of waste, frustration, exhaustion and missed opportunity. By contrast, the notion that everything the network knows might be brought to bear on someone or -thing standing before us, directly there, directly present, available to anyone with the wherewithal to sign a two-year smartphone contract and download an app — this is a deeply seductive idea. It offers the same aura of omnipotence, that same frisson of godlike power evoked by our new ability to gather, sift and make meaning of the traces of urban activity, here positioned as a direct extension of our own senses.

2

Why not take advantage of this capability? After all, the richness and complexity of city life confronts us with any number of occasions on which the human sensorium could do with a little help.

Let a few hundred neurons in the middle fusiform gyrus of the brain’s right hemisphere be damaged, or fail to develop properly in the first place, and the result is a disorder called prosopagnosia, more commonly known as faceblindness. As the name suggests, the condition deprives its victims of the ability to recognize faces and associate them with individuals; at the limit, someone suffering with a severe case may be entirely unable to remember what his or her loved ones look like. So central is the ability to recognize others to human socialization, though, that even far milder cases cause significant problems.

Sadly, this is something I can attest to from firsthand experience. Like an estimated 2.5%[1] of the population, I suffer from the condition, and even in the relatively attenuated form I’m saddled with, my broad inability to recognize people has caused more than a few experiences of excruciating awkwardness. At least once or twice a month I run into people on the street who clearly have some degree of familiarity with me, and find myself unable to come up with even a vague idea of who they might be; I’ll introduce myself to a woman at a party, only to have her remind me (rather waspishly, but who can blame her) that we’d worked together on a months-long project. Deprived of contextual cues — the time and location at which I usually meet someone, a distinctive hairstyle or mode of dress — I generally find myself no more able to recognize former colleagues or students than I can complete strangers. And as uncomfortable as this can be for me, I can only imagine how humiliating it is for the person on the other end of the encounter.

I long ago lost track of the number of times in my life when I would have been grateful for some subtle intercessionary agent: something that might drop a glowing outline over the face of someone approaching me and remind me of his or her name[2], the occasion on which we met last, maybe even what we talked about on that occasion. It would spare both of us from mortification, and shield my counterpart from the inadvertent but real insult implied by my failure to recognize them. So the ambition of using AR in this role is lovely — precisely the kind of sensitive technical deployment I believe in, where technology is used to lower the barriers to socialization, and reduce or eliminate the awkwardnesses that might otherwise prevent us from better knowing one another.

But it’s hard to imagine any such thing being accomplished by the act of holding a phone up in front of my face, between us, forcing you to wait first for me to do so and then for the entire chain of technical events that must follow in order to fulfill the aim at the heart of the scenario. The device must acquire an image of your face with the camera, establish the parameters of that face from the image, and upload those parameters to the cloud via the fastest available connection, so they may be compared with a database of facial measurements belonging to known individuals; if a match is found, the corresponding profile must be located, and the appropriate information from that profile piped back down the connection so it may be displayed as an overlay on the screen image.
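
Laid out as a back-of-the-envelope budget, with every figure an assumption supplied purely for illustration rather than a measurement of any real system, even the best case runs to several seconds:

```python
# Hypothetical, illustrative timings for the chain of events described above.
# None of these figures is a measurement; real-world numbers could easily be worse.
steps_ms = {
    "raise the phone, frame the face, capture an image": 1500,
    "detect the face and compute its parameters on-device": 300,
    "upload the parameters over a cellular connection": 250,
    "match against a remote database of facial biometrics": 400,
    "retrieve the matching profile": 250,
    "render the overlay on screen": 100,
}
total_ms = sum(steps_ms.values())
print(f"best case, with no retries or dropped packets: {total_ms / 1000:.1f} s")  # 2.8 s
```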

Too many articulated parts are involved in this interaction, too many dependencies — not least of which is the coöperation of a Facebook, a Google, or some other enterprise with a reasonably robust database of facial biometrics, and that is of course wildly problematic for other reasons. Better I should have confessed my confusion to you in the first place.

Perhaps a less technologically-intensive scenario would be better suited to the phone as platform for augmentation? How about helping a user find their way around the transit system, amidst all the involutions of the urban labyrinth?

3

Here we can weigh the merits of the use case by considering an actual, shipping product, Acrossair’s Nearest Subway app for the iPhone, first released in 2010[3]. Like its siblings for London and Paris, Nearest Tube and Nearest Metro, Nearest Subway uses open location data made available by the city’s transit authority to specify the positions of transit stops in three-dimensional space. On launch, the app loads a hovering scrim of simple black tiles featuring the name of each station, and icons of the lines that serve it; the tiles representing more distant stations are stacked atop those that are closer. Rotate, and the scrim of tiles rotates with you. Whichever way you face, you’ll see a tile representing the nearest subway station in the direction of view, so long as some outpost of the transit network lies along that bearing in the first place.

Nearest Subway is among the more aesthetically appealing phone-based AR applications, eschewing junk graphics for simple, text-based captions sensitively tuned to the conventions of each city’s transit system. If nothing else, it certainly does what it says on the tin. It is, however, almost completely worthless as a practical aid to urban navigation.

When aimed to align with the Manhattan street grid from the corner of 30th Street and First Avenue, Nearest Subway indicates that the 21st Street G stop in Long Island City is the closest subway station, at a distance of 1.4 miles in a north-northeasterly direction.

As it happens, there are a few problems with this. For starters, from this position the Vernon Boulevard-Jackson Avenue stop on the 7 line is 334 meters, or roughly four New York City blocks, closer than 21st Street, but it doesn’t appear as an option. This is either an exposure of some underlying lacuna in the transit authority’s database — unlikely, but as anyone familiar with the MTA understands implicitly, well within the bounds of possibility — or more probably a failure on Acrossair’s part to write code that retrieves these coordinates properly.


Just as problematically, the claimed bearing is roughly 55 degrees off. If, as will tend to be the case in Manhattan, you align yourself with the street grid, a phone aimed directly uptown will be oriented at 27 degrees east of due north, at which point Nearest Subway suggests that the 21st Street station is directly ahead of you. But it actually lies on an azimuth of 82 degrees; if you took the app at its word, you’d be walking uptown a long time before you hit anything even resembling a subway station. This is most likely a calibration error in the iPhone’s compass, but fairly or otherwise Nearest Subway shoulders the greater part of the blame here — as anyone familiar with computational systems has understood since the time of Babbage, if you put garbage in, you’ll get garbage out.
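
Both the distance and the bearing are easy to sanity-check with the standard great-circle formulas. In the sketch below the coordinates are approximate values supplied for illustration rather than taken from the MTA’s published data, but they land close to the figures cited above:

```python
# Great-circle distance and initial bearing between two lat/lon points.
# Station and street-corner coordinates below are approximate and supplied
# purely for illustration; they are not drawn from the MTA's dataset.
from math import radians, degrees, sin, cos, asin, atan2, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in metres along the surface of a spherical Earth."""
    R = 6371000.0
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Forward azimuth from point 1 to point 2, degrees clockwise from true north."""
    p1, p2 = radians(lat1), radians(lat2)
    dl = radians(lon2 - lon1)
    x = sin(dl) * cos(p2)
    y = cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dl)
    return (degrees(atan2(x, y)) + 360.0) % 360.0

corner = (40.7420, -73.9756)      # 30th Street and First Avenue (approximate)
g_stop = (40.7440, -73.9497)      # 21st Street G stop, Long Island City (approximate)
vernon = (40.7426, -73.9535)      # Vernon Blvd-Jackson Ave, 7 line (approximate)

print(haversine_m(*corner, *g_stop))           # ~2.2 km, i.e. roughly 1.4 miles
print(haversine_m(*corner, *vernon))           # ~1.9 km: a few hundred metres closer
print(initial_bearing_deg(*corner, *g_stop))   # ~84 degrees east of north with these
                                               # coordinates: nowhere near "directly ahead"
```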

Furthermore, since by design the app only displays those stations roughly aligned with your field of vision, there’s no way for it to notify you that the nearest station may be directly behind your back. Unless you want to rotate a full 360 degrees, then, and make yourself look like a complete idiot in the process, the most practical way to use Nearest Subway is to aim the phone directly down, which makes a reasonably useful ring of directional arrows and distances pop up. (These, of course, could have been superimposed on a conventional map in the first place, without undertaking the effort of capturing the camera image and augmenting it with a hovering overlay of theoretically compass-calibrated information.)

However unfortunate these stumbles may be, they can all be resolved, addressed with tighter code, an improved user interface or a better bearing-determination algorithm. Acrossair could fix them all, though — enter every last issue in a bug tracker, and knock them down one by one — and that still wouldn’t address the primary idiocy of urban AR in this mode: from 30th Street and First Avenue, the 21st Street G stop is across the East River. You need to take a subway to get there in the first place. However aesthetically pleasing an interface may be, using it to find the closest station as the crow flies does you less than no good when you’re separated from it by a thousand meters of water.

Finally, Nearest Subway betrays a root-level misunderstanding of the relationship between a citydweller and a transportation network. In New York City, as in every other city with a complex underground transit system, you almost never find yourself in a situation where you need to find the station that’s nearest in absolute terms to begin with; it’s far more useful to find the nearest station on a line that gets you where you want to go. Even at the cost of cluttering what’s on the screen, then, the very first thing the would-be navigator of the subway system needs is a way to filter the options before them by line.

I raise these points not to park all of the blame at Acrossair’s door, but to suggest that AR itself is badly unsuited to this role, at least when handled in this particular way. It takes less time to load and use a map than it does to retrieve the same information from an augmentive application, and the map provides a great deal more of the context so necessary to orienting yourself in the city. At this point in technological evolution, then, more conventional interface styles will tend to furnish a user with relevant information more efficiently, with less of the latency, error and cruft that inevitably seem to attend the attempt to superimpose it over the field of vision.

4

If phone-based augmentation performs poorly as social lubricant or aid to urban navigation, what about another role frequently proposed for AR, especially by advocates in the cultural heritage sector? This use case hinges on the argument that by superimposing images or other vestiges of the past of a place directly over its present, AR effectively endows its users with the ability to see through time.

This might not make much sense at all in Songdo, or Masdar, or any of the other new cities now being built from scratch on greenfield sites. But anyone who lives in a place old enough to have felt the passage of centuries knows that history can all too easily be forgotten by the stones of the city. Whatever perturbations from historical events may still be propagating through the various flows of people, matter, energy and information that make a place, they certainly aren’t evident to casual inspection. An augmented view returning the layered past to the present, in such a way as to color our understanding of the things all around us, might certainly prove to be more emotionally resonant than any conventional monument.

Byzantium, old Edo, Roman Londinium, even New Amsterdam: each of these historical sites is rife with traces we might wish to surface in the city occupying the same land at present. Locales overwhelmed by more recent waves of colonization, gentrification or redevelopment, too, offer us potent lenses through which to consider our moment in time. It would surely be instructive to retrieve some record of the jazz- and espresso-driven Soho of the 1950s and layer it over what stands there at present; the same goes for the South Bronx of 1975. But traversed as it was during the twentieth century by multiple, high-intensity crosscurrents of history, Berlin may present the ultimate terrain on which to contemplate recuperation of the past.

This is a place where pain, guilt and a sense of responsibility contend with the simple desire to get on with things; no city I’m familiar with is more obsessively dedicated to the search for a tenable balance between memory and forgetting. The very core of contemporary Berlin is given over to a series of puissant absences and artificially-sustained presences, from the ruins of Gestapo headquarters, now maintained as a museum called Topography of Terror, to the remnants of Checkpoint Charlie. A long walk to the east out leafy Karl-Marx-Allee — Stalinallee, between 1949 and 1961 — takes you to the headquarters of the Stasi, the feared secret police of the former East Germany, also open to the public as a museum. But there’s nowhere in Berlin where the curious cost of remembering can be more keenly felt than in the field of 2,711 concrete slabs at the corner of Ebertstrasse and Hannah-Arendt-Strasse. This is the Memorial to the Murdered Jews of Europe, devised by architect Peter Eisenman, with early conceptual help from the sculptor Richard Serra.

Formally, the grim array is the best thing Eisenman has ever set his hand to, very nearly redemptive of a career dedicated to the elevation of fatuous theory over aesthetic coherence; perhaps it’s the Serra influence. But as a site of memory, the Monument leaves a great deal to be desired. It’s what Michel Foucault called a heterotopia: something set apart from the ordinary operations of the city, physically and semantically, a place of such ponderous gravity that visitors don’t quite know what to make of it. On my most recent visit, the canyons between the slabs rang with the laughter of French schoolchildren on a field trip; the children giggled and flirted and shouted to one another as they leapt between the stones, and whatever the designer’s intent may have been, any mood of elegy or commemoration was impossible to establish, let alone maintain.

Roughly two miles to the northeast, on the sidewalk in front of a doner stand in Mitte, is a memorial of quite a different sort. Glance down, and you’ll see the following words, inscribed into three brass cubes set side by side by side between the cobblestones:

HIER WOHNTE
ELSA GUTTENTAG
GEB. KRAMER
JG. 1883

DEPORTIERT 29.11.1942
ERMORDET IN
AUSCHWITZ

HIER WOHNTE
KURT GUTTENTAG
JG. 1877

DEPORTIERT 29.11.1942
ERMORDET IN
AUSCHWITZ

HIER WOHNTE
ERWIN BUCHWALD
JG. 1892

DEPORTIERT 1.3.1943
ERMORDET IN
AUSCHWITZ

Ermordet in Auschwitz: that is, on specific dates in November of 1942 and March of the next year, the named people living at this address were taken across this very sidewalk and forcibly transported hundreds of miles east by the machinery of their own government, to a country they’d never known and a facility expressly designed to murder them. The looming façades around you were the last thing they ever saw as free people.

It’s in the dissonance between the everyday bustle of Mitte and these implacable facts that the true horror resides — and that’s precisely what makes the brass cubes a true memorial, indescribably more effective than Eisenman’s. The brass cubes, it turns out, are Stolpersteine, or “stumbling blocks,” a project of artist Gunter Demnig; these are but three of what are now over 32,000 that Demnig has arranged to have placed in some 700 cities. The Stolpersteine force us to read this stretch of unremarkable sidewalk in two ways simultaneously: both as a place where ordinary people go placidly about their ordinary business, just as they did in 1942, and as one site of a world-historical, continental-scale ravening.

The stories etched in these stones are the kind of facts about a place that would seem to yield to a strategy of augmentation. The objection could certainly be raised that I found them so resonant precisely because I didn’t see them every day, and that their impact would very likely fade with constant exposure; we might call this the evil of banality. But being compelled to see and interpret the mundane things I did in these streets through the revenant past altered my consciousness, in ways subtler and longer-lasting than anything Eisenman’s sepulchral array of slabs was able to achieve. AR would merely make the metaphor literal — in fact, it’s easy for me to imagine the disorienting, decentering, dis-placing impact of having to engage the world through a soft rain of names, overlaid onto the very places from which their owners were stolen.

But once again, it’s hard to imagine this happening via the intercession of a handset. Nor are the qualities that make smartphone-based AR so catastrophically clumsy, in virtually every scenario of use, particularly likely to change over time.

The first is the nature of functionality on the smartphone. As we’ve seen, the smartphone is a platform on which each discrete mode of operation is engaged via a dedicated, single-purpose app. Any attempt at augmenting the environment, therefore, must be actively and consciously invoked, to the exclusion of other useful functionality. The phone, when used to provide such an overlay, cannot also and at the same time be used to send a message, look up an address, buy a cup of coffee, or do any of the other things we now routinely expect of it.

The second reservation is physical. Providing the user with a display surface for graphic annotation of the forward view simply isn’t what the handset was designed to do. It must be held before the eyes like a pane of glass in order for the augmented overlay to work as intended. It hardly needs to be pointed out that this gesture is not one particularly well-suited to the realities of urban experience. It has the doubly unappealing quality of announcing the user’s distraction and vulnerability to onlookers, while simultaneously ensuring that the device is held in the weak grip of the extended arm — a grasp from which it may be plucked with relative ease.

Taken together, these two impositions strongly undercut the primary ostensible virtue of an augmented view, which is its immediacy. The sole genuine justification for AR is the idea that information is simply there, copresent with what you’re already looking at and able to be assimilated without thought or effort.

That sense of effortlessness is precisely what an emerging class of wearable mediators aims to provide for its users. The first artifact of this class to reach consumers is Google’s Glass, which mounts a high-definition, forward-facing camera, a head-up reticle and the microphone required by the natural-language speech recognition interface on a lightweight aluminum frame. While Glass poses any number of aesthetic, practical and social concerns — all of which remain to be convincingly addressed, by Google or anyone else — it does at least give us a way to compare hands-free, head-mounted AR with the handset-based approach.

Would any of the three augmentation scenarios we explored be improved by moving the informational overlay from the phone to a wearable display?

5

A system designed to mitigate my prosopagnosia by recognizing faces for me would assuredly be vastly better when accessed via head-mounted interface; in fact, that’s the only scenario of technical intervention in relatively close-range interpersonal encounters that’s credible to me. The delay and physical awkwardness occasioned by having to hold a phone between us goes away, and while there would still be a noticeable saccade or visual stutter as I glanced up to read your details off my display, this might well be preferable to not being remembered at all.

That is, if we can tolerate the very significant threats to privacy involved, which only start with Google’s ownership of or access to the necessary biometric database. There’s also the question of their access to the pattern of my requests, and above all the one fact inescapably inherent to the scenario: that people are being identified as being present in a certain time and place, without any necessity whatsoever of securing consent on their part. By any standard, this is a great deal of risk to take on, all to lubricate social interactions for 2.5% of the population.

Nearest Subway, as is, wouldn’t be improved by presentation in the line of sight. Given what we’ve observed about the way people really use subway networks, information about the nearest station in a given direction wouldn’t be of any greater utility when splashed on a head-up display than it is on the screen of a phone. Whatever the shortcomings of this particular app, though, they probably don’t imply anything in particular about the overall viability of wearable AR in the role of urban navigation, and in many ways the technology does seem rather well-suited to the wayfinding challenges faced by the pedestrian.

Of the three scenarios considered here, though, it’s AR’s potential to offer novel perspectives on the past of a place that would be most likely to benefit from the wearable approach. We would quite literally see the quotidian environment through the lens of a history superimposed onto it. So equipped, we could more easily plumb the psychogeographical currents moving through a given locale, better understand how the uses of a place had changed over time, or hadn’t. And because this layer of information could be selectively surfaced — invoked and banished via voice command, toggled on or off at will — presenting information in this way might well circumvent the potential for banality through overfamiliarization that haunts even otherwise exemplary efforts like Demnig’s Stolpersteine.

And this suggests something about further potentially productive uses for augmentive mediators like Glass. After all, there are many kinds of information that may be germane to our interpretation of a place, yet effectively invisible to us, and historical context is just one of them. If our choices are shaped by dark currents of traffic and pricing, crime and conviviality, it’s easy to understand the appeal of any technology proposing that these dimensions of knowledge be brought to bear on that which is seen, whether singly or in combination. The risk of bodily harm, whatever its source, might be rendered as a red wash over the field of vision; point-by-point directions as a bright and unmistakable guideline reaching into the landscape. In fact any pattern of use and activity, so long as its traces were harvested by some data-gathering system and made available to the network, might be made manifest to us in this way.

Some proposed uses of mediation are more ambitious still, pushing past mere annotation of the forward view to the provision of truly novel modes of perception — for example, the ability to “see” radiation at wavelengths beyond the limits of human vision, or even to delete features of the visual environment perceived as undesirable[4]. What, then, keeps wearable augmentation from being the ultimate way for networked citizens to receive and act on information?

6

The approach of practical, consumer-grade augmented reality confronts us with an interlocking series of concerns, ranging from the immediately practical to the existential.

A first set of reservations centers on the technical difficulties involved in the articulation of an acceptably high-quality augmentive experience. We’ve so far bypassed discussion of these so we could consider different aspects of the case for AR, but ultimately they’re not of a type that allows anyone to simply wave them away.

At its very core, the AR value proposition subsists in the idea that interactions with information presented in this way are supposed to feel “effortless,” but any such effortlessness would require the continuous (and continuously smooth) interfunctioning of a wild scatter of heterogeneous elements. In order to make good on this promise, a mediation apparatus would need to fuse all of the following elements: a sensitively-designed interface; the population of that interface with accurate, timely, meaningful and actionable information; and a robust, high-bandwidth connection to the networked assets furnishing that information from any point in the city, indoors or out. Even putting questions of interface design to the side, the technical infrastructure capable of delivering the other necessary elements reliably enough that the attempt at augmentation doesn’t constitute a practical and social hazard in its own right does not yet exist — not anywhere in North America, anyway, and not this year or next. The hard fact is that for a variety of reasons having to do with national spectrum policy, a lack of perceived business incentives for universal broadband connectivity, and other seemingly intractable circumstances, these issues are nowhere near being ironed out.

In the context of augmentation, as well, the truth value of representations made about the world acquires heightened significance. By superimposing information directly on its object, AR arrogates to itself a peculiar kind of claim to authority, a claim of a more aggressive sort than that implicit in other modes of representation, and therefore ought to be held to a higher standard of completeness and accuracy[5]. As we saw with Nearest Subway, though, an overlay can only ever be as good as the data feeding it, and the augurs in this respect are not particularly reassuring. Right now, Google’s map of the commercial stretch nearest to my apartment building provides labels for only four of the seven storefront businesses on the block, one of which is inaccurately identified as a restaurant that closed many years ago. If even Google, with all the resources it has at its disposal, struggles to provide its users with a description of the streetscape that is both comprehensive and correct, how much more daunting will other actors find the same task?

Beyond this are the documented problems with visual misregistration[6] and latency that are of over a decade’s standing, and have not been successfully addressed in that time — if anything, have only been exacerbated by the shift to consumer-grade hardware. At issue is the mediation device’s ability to track rapid motions of the head, and smoothly and accurately realign any graphic overlay mapped to the world; any delay in realignment of more than a few tens of milliseconds is conspicuous, and risks causing vertigo, nausea and problems with balance and coordination. The initial release of Glass, at least, wisely shies away from any attempt to superimpose such overlays, but the issue must be reckoned with at some point if useful augmentive navigational applications are ever to be developed.

7

Another set of concerns centers on the question of how long such a mediator might comfortably be worn, and what happens after it is taken off. This is of especial concern given the prospect that one or another form of wearable AR might become as prominent in the negotiation of everyday life as the smartphone itself. There is, of course, not much in the way of meaningful prognostication that can be made ahead of any mass adoption, but it’s not unreasonable to build our expectations on the few things we do know empirically.

Early users of Google’s Glass report disorientation upon removing the headset, after as few as fifteen minutes of use — a mild one, to be sure, and easily shaken off, from all accounts the sort of uneasy feeling that attends staring overlong at an optical illusion. If this represents the outer limit of discomfort experienced by users, it’s hard for me to believe that it would have much impact on either the desirability of the product or people’s ability to function after using it. But further hints as to the consequences of long-term use can be gleaned from the testimony of pioneering researcher Steve Mann, who has worn a succession of ever-lighter and more-capable mediation rigs all but continuously since the mid-1980s. And his experience would seem to warrant a certain degree of caution: Mann, in his own words, early on “developed a dependence on the apparatus,” and has found it difficult to function normally on the few occasions he has been forcibly prevented from accessing his array of devices.

When deprived of his set-up for even a short period of time, Mann experiences “profound nausea, dizziness and disorientation”; he can neither see clearly nor concentrate, and has difficulty with basic cognitive and motor tasks[7]. He speculates that over many years, his neural wiring has adapted to the continuous flow of sensory information through his equipment, and this is not an entirely ridiculous thing to think. At this point, the network of processes that constitutes Steve Mann’s brain — that in some real albeit reductive sense constitutes Steve Mann — lives partially outside his skull.

The objection could be made that this is always already the case, for all of us — that some nontrivial part of everything that makes us what we are lives outside of us, in the world, and that Mann’s situation is only different in that much of his outboard being subsists in a single, self-designed apparatus. But if anything, this makes the prospect of becoming physiologically habituated to something like Google Glass still more worrisome. It’s precisely because Mann developed and continues to manage his own mediation equipment that he can balance his dependency on it with the relative freedom of action enjoyed by someone who for the most part is able to determine the parameters under which that equipment operates.

If Steve Mann has become a radically hybridized consciousness, at least he has a legitimate claim to ownership and control over all of the places where that consciousness is instantiated. By contrast, all of the things a commercial product like Glass can do for the user rely on the ongoing provision of a service — and if there’s anything we know about services, it’s that they can be and are routinely discontinued at will, as the provider fails, changes hands, adopts a new business strategy or simply reprioritizes.

8

A final set of strictly practical concerns has to do with the collective experience of augmentation, or what implications our own choice to be mediated in this way might hold for the experience of others sharing the environment.

For all it may pretend to transparency, literally and metaphorically, any augmentive mediator by definition imposes itself between the wearer and the phenomenal world. This, of course, is by no means a quality unique to augmented reality. It’s something AR has in common with a great many ways we already buffer and mediate what we experience as we move through urban space, from listening to music to wearing sunglasses. All of these impose a certain distance between us and the full experiential manifold of the street, either by baffling the traces of it that reach our senses, or by offering us a space in which we can imagine and project an alternative narrative of our actions.

But there’s a special asymmetry that haunts our interactions with networked technology, and tends to undermine our psychic investment in the immediate physical landscape; if “cyberspace is where you are when you’re on the phone,” it’s certainly also the “place” you are when you text or tweet someone while walking down the sidewalk. I’ve generally referred to what happens when someone moves through the city while simultaneously engaged in some kind of remote interaction as a condition of “multiple adjacency,” but of course it’s really no such thing: so far, at least, only one mode of spatial experience can be privileged at a given time. And if it’s impossible to participate fully in both of these realms at once, one of them must lose out.

Watch what happens when a pedestrian first becomes conscious of receiving a call or a text message, the immediate damming they cause in the sidewalk flow as they pause to respond to it. Whether the call is made hands-free or otherwise doesn’t really seem to matter; the cognitive and emotional investment in what transpires in the interface is what counts, and this investment is generally so much greater than it is in the surroundings that street life clearly suffers as a result. The risk inherent in this divided attention appears to be showing up in the relevant statistics in the form of an otherwise hard-to-account-for upturn in accidents involving pedestrian fatalities[8], where such numbers had been falling for years. This is a tendency that is only likely to be exacerbated by augmentive mediation, particularly where content of high inherent emotional involvement is concerned.

9

At this moment in time, it would be hard to exaggerate the appeal the prospect of wearable augmentation holds for its vocal cohort of enthusiasts within the technology community. This fervor can be difficult to comprehend, so long as AR is simply understood to refer to a class of technologies aimed at overlaying the visual field with information about the objects and circumstances in it.

What the discourse around AR shares with other contemporary trans- and posthuman narratives is a frustration with the limits of the flesh, and a frank interest in transcending them through technical means. To advocates, the true appeal of projects like Google’s Glass is that they are first steps toward the fulfillment of a deeper promise: that of becoming-cyborg. Some suggest that ordinary people mediate the challenges of everyday life via complex informational dashboards, much like those first devised by players of World of Warcraft and similar massively multiplayer online role-playing games. The more fervent dream of a day when their capabilities are enhanced far beyond the merely human by a seamless union of organic consciousness with networked sensing, processing, analytic and storage assets.

Beyond the profound technical and practical challenges involved in achieving any such goal, though, someone not committed to one or another posthuman program may find that they have philosophical reservations about this notion, and about what it implies for urban life. These may be harder to quantify than strictly practical objections, but any advocate of augmentation technologies who is also interested in upholding the notion of a city as a shared space will have to come to some reckoning with them.

Anyone who cares about what we might call the full bandwidth of human communication — very much including transmission and reception of those cues vital to understanding, but only present beneath the threshold of conscious perception — ought to be concerned about the risk posed to interpersonal exchanges by augmentive mediation. Wearable devices clearly have the potential to exacerbate existing problems of self-absorption and mutual inconsideration[9]. Although in principle there’s no reason such devices couldn’t be designed to support or even enrich the sense of intersubjectivity, what we’ve seen about the technologically-mediated pedestrian’s unavailability to the street doesn’t leave us much room for optimism on this count. The implication is that if the physical environment doesn’t fully register to a person so equipped, neither will other people.

Nor is the body by any means the only domain that the would-be posthuman subject may wish to transcend via augmentation. Subject as it is to the corrosive effects of entropy and time, forcing those occupying it to contend with the inconvenient demands of others, the built environment is another. Especially given current levels of investment in physical infrastructure in the United States, there is a very real risk that those who are able to do so will prefer retreat behind a wall of mediation to the difficult work of being fully present in public. At its zenith, this tendency implies both a dereliction of public space and an almost total abandonment of any notion of a shared public realm. This is the scenario imagined by science-fiction author Vernor Vinge in Rainbows End (2006), in which people interact with the world’s common furniture through branded thematic overlays of their choice; it’s a world that can be glimpsed in the matter-of-factly dystopian videos of Keiichi Matsuda, in which a succession of squalid environments come to life only when activated by colorful augmentive animations.

The most distressing consequences of such a dereliction would be felt by those left behind in any rush toward augmentation. What happens when the information necessary to comprehend and operate an environment is not immanent to that environment, but has become decoupled from it? When signs, directions, notifications, alerts and all the other instructions necessary to the fullest use of the city appear only in an augmentive overlay, and as is inevitably the case, that overlay is available to some but not others[10]? What happens to the unaugmented human under such circumstances? The perils would surely extend beyond a mere inability to act on information; the non-adopter of a particularly hegemonic technology almost always places themselves at jeopardy of being seen as a willful transgressor of norms, even an ethical offender. Anyone forgoing augmentation, for whatever reason, may find that they are perceived as somehow less than a full member of the community, with everything that implies for the right to be and act in public.

The deepest critique of all those lodged against augmented reality is sociologist Anne Galloway’s, and it is harder to answer. Galloway suggests that the discourse of computational augmentation, whether consciously or otherwise, “position[s] everyday places and social interactions as somewhat lacking or in need of improvement.” Again there’s this Greshamization, this sense of a zero-sum relationship between AR and a public realm already in considerable peril just about everywhere. Maybe the emergence of these systems will spur us to some thought as to what it is we’re trying so hard to augment. Philip K. Dick once defined reality as “that which refuses to go away when you stop believing in it,” and it’s this bedrock quality of universal accessibility — to anyone at all, at any time of his or her choosing — that constitutes its primary virtue. If nothing else, reality is the one platform we all share, a ground we can start from in undertaking the arduous and never-comfortable process of determining what else we might agree upon. To replace this shared space with the million splintered and mutually inconsistent realities of individual augmentation is to give up on the whole pretense that we in any way occupy the same world, and therefore strikes me as being deeply inimical to the urban project as I understand it. A city in which the physical environment has ceased to function as a common reference frame is, at the very least, terribly inhospitable soil for democracy, solidarity or simple fellow-feeling to take root in.

It may well be that this concern is overblown. There is always the possibility that augmented reality never will amount to very much, or that after a brief period of consideration it’s actively rejected by the mainstream audience. Within days of the first significant nonspecialist publicity around Google Glass, Seattle dive bar The 5 Point became the first commercial establishment known to have enacted a ban[11] on the device, and if we can fairly judge from the rather pungent selection of terms used to describe Glass wearers in the early media commentary, it won’t be the last. By the time you read these words, these weak signals may well have solidified into some kind of rough consensus, at least in North America, that wearing anything like Glass in public space constitutes a serious faux pas. Perhaps this and similar AR systems will come to rest in a cultural-aesthetic purgatory like that currently occupied by Bluetooth headsets, and if that does turn out to be the case, any premature worry about the technology’s implications for the practice of urban democracy will seem very silly indeed.

But something tells me that none of the objections we’ve discussed here will prove broadly dissuasive, least of all my own personal feelings on the subject. For all the hesitations anybody may have, and for all the vulnerabilities even casual observers can readily diagnose in the chain of technical articulations that produces an augmentive overlay, it is hard to argue against a technology that glimmers with the promise of transcendence. Over anything beyond the immediate near term, some form of wearable augmentive device does seem bound to take a prominent role in returning networked information to the purview of a mobile user at will, and thereby in mediating the urban experience. The question then becomes what kind(s) of urbanity will be produced by people endowed with this particular set of capabilities, individually and collectively, and how we might help the unmediated contend with cities unlike any they have known, enacted for the convenience of the ambiguously transhuman, under circumstances whose depths have yet to be plumbed.


Notes on this section
[1] Grüter T, Grüter M, Carbon CC (2008). “Neural and genetic foundations of face recognition and prosopagnosia”. J Neuropsychol 2 (1): 79–97.

[2] For early work toward this end, see http://www.cc.gatech.edu/~thad/p/journal/augmented-reality-through-wearable-computing.pdf. The overlay of a blinking outline or contour used as an identification cue, incidentally, has long been a staple of science-fictional information displays, showing up in pop culture as far back as the late 1960s. The earliest appearance I can locate is 2001: A Space Odyssey (1968), in which the navigational displays of both the Orion III spaceplane and Discovery itself relied heavily on the trope — this, presumably, because they were produced by the same contractor, IBM. See also Pete Shelley’s music video for “Homosapien” (1981) and the traverse corridors projected through the sky of Blade Runner’s Los Angeles (1982).

[3] As always, I caution the reader that the specifics of products and services, and even their availability, will certainly change over time. All comments here regarding Nearest Subway pertain to v1.4.

[4] See discussion of “Superplonk” in [a later section]. http://m.spectrum.ieee.org/podcast/geek-life/profiles/steve-manns-better-version-of-reality

[5] At the very least, the user interface should offer some kind of indication as to the confidence of a proffered identification, and perhaps how that determination was arrived at. See [a later section] on seamfulness.

[6] Azuma, “Registration Errors in Augmented Reality,” 1997. http://www.cs.unc.edu/~azuma/azuma_AR.html

[7] http://www.nytimes.com/2002/03/14/technology/at-airport-gate-a-cyborg-unplugged.html

[8] See Governors Highway Safety Association, “Spotlight on Highway Safety: Pedestrian Fatalities by State,” 2010. http://www.ghsa.org/html/publications/pdf/spotlights/spotlight_ped.pdf; similarly, a recent University of Utah study found that the act of immersion in a conversation, rather than any physical aspect of use, is the primary distraction while driving and talking on the phone. That hands-free headset may not keep you out of a crash after all. http://www.informationweek.com/news/showArticle.jhtml?articleID=205207840

[9] A story on the New York City-based gossip site Gawker expressed this point of view directly, if rather pungently: “If You Wear Google’s New Glasses, You Are An Asshole.” http://gawker.com/5990395/if-you-wear-googles-new-glasses-you-are-an-asshole

[10] The differentiation involved might be very fine-grained indeed. Users may interact with informational objects that exist only for them and for that single moment.

[11] The first widespread publicity for Glass coincided with Google’s release of a video on Wednesday, 20th February, 2013; The 5 Point announced its ban on 5th March. The expressed concerns center more on the device’s data-collection capability than anything else: according to owner Dave Meinert, his customers “don’t want to be secretly filmed or videotaped and immediately put on the Internet,” and this is an entirely reasonable expectation, not merely in the liminal space of a dive bar but anywhere in the city. See http://news.cnet.com/8301-1023_3-57573387-93/seattle-dive-bar-becomes-first-to-ban-google-glass/

The canonical smart city: A pastiche

Consider this a shooting script for one of those concept videos so beloved of the big technology vendors. If you find my reading here tendentious, I can assure you that every element of the scenario I present here has been drawn directly from the website copy or other promotional literature of IBM, Cisco, Siemens, Living PlanIT, Gale International (i.e. Songdo) or Masdar.

Daybreak on a Wednesday in April, sometime in the first third of the twenty-first century. The lights come up slowly in Maria Villanueva’s condo, forty-seven stories up the side of the soaring Phase III development. It’s a few weeks past the first anniversary of Maria’s arrival in Noblessity, and in some ways she’s still getting used to the way she lives in this brand-new city of ten square kilometers, so recently and famously reclaimed from the ocean itself.

Her building, for example: a daringly helical twist of stacked apartment units, devised by a name-brand Danish architectural practice. Back home she could never have afforded to live in anything remotely like this — and that’s if there even were buildings like this at home in the first place, which she doubts. This morning the active shutters, sensing a rare onshore breeze, have deployed microfilaments to trap the moisture in the air, softly hazing them at the edges so they seem to blur into the murky sunlight. Even the soft light that makes it through is too bright for Maria, though, and she clutches vaguely at bedside for her phone so she can launch the app that controls the windowshades.

Maria’s husband Mark left for work hours ago — he’s a lawyer negotiating EMEA rebroadcast rights for an American basketball league, and his teleconferences tend to happen on Los Angeles time. So on this Wednesday morning, she finds she has the apartment to herself. She drags herself from bed, shouts for the kitchen to fix her a latte and heads to the en-suite bathroom.

Headlines stack up on the mirror, and Maria scans them as she blowdries her hair: “Climate talks enter a third fruitless…guest-worker privileges revoked following…Royal scandal erupts as Mail drone captures…” None of this seems like it will immediately bear on her work, and just as quickly as the headlines arrive she dismisses them, with the mere swipe of a fingertip.

The walk-in closet has an app to choose outfits appropriate to the weather, but the weather’s always the same here — punishingly hot and dry outside, and invariably a comfortable 72° everywhere that isn’t. Maria has never once launched the app. She gives herself a last quick once-over in the full-length, pats down a few vagrant strands of hair, and then it’s off to work.

Maria belongs to an elite team of analysts tasked with riding herd on autonomous trading algorithms for a City of London-based financial concern. After a solid six months in which she made a newcomer’s show of diligence, she’d rather gotten used to the luxury of working from home most days of the week, but in the interests of team cohesion senior management has just issued a policy forbidding this. And so once again she finds herself faced with the necessity of a twice-daily commute between the ranked condos of the residential zone and the supertowers of the Central Business District.

This is not, as it happens, a huge imposition. The mobility fee is included in her compensation package, and actually, the drive isn’t so bad; depending on traffic and the precise route chosen by the car, it takes anywhere from ten to fifteen minutes. Maria knows from experience that if she calls the car service as she walks out the front door of her unit, her car will be pulling up under the porte-cochère just as she gets there. And so it is this morning, with the elevator, as always, alert to the patterns of movement within the building and therefore empty of anyone else. She momentarily realizes she’s forgotten, again, to shut off the lights in the closet, but it doesn’t matter; but for the low-level autonomic systems, everything in the condo fades to black thirty seconds after the unit detects a lack of human presence.

The briefest blast of desiccating heat, and then she’s safely into the car. Today’s car is a little funky, a little foul — not so much that somebody had actually smoked a cigar in it, but maybe that it had recently been used by somebody who smoked a lot of cigars. And used rather too much cologne. Maria punches the air conditioning to its highest setting and tries to breathe through her mouth.

There’s apparently been a fender-bender on the Grand Axial, and the car is rerouted around it without so much as a peep. And so Maria finds that her way to work this morning takes her via the Coastal Ringway, past the three enormous pipelines that supply Noblessity with fresh water from the mainland. This is provided by the host nation at no expense, for the duration of the developer’s 99-year lease on the land — just one of the many ways the host nation expresses its gratitude for the massive infusion of talent and capital sitting just offshore. Of course it’s been awhile since Maria crossed the causeway; truth be told, she only does so on her way to or from the airport. But she keeps meaning to drag Mark over for a visit, get a taste for how the people here really live, and one of these weekends she’s sure they will.

Just past the ten-story screen that fronts the Museum of Contemporary Art, as the car passes beneath the overway heralding entry into the CBD, the windshield starts to pulse red. The soft bonging of an awareness alert issues from the dashboard, and there is the slightest sideways lurch as the car moves to put some distance between itself and a disturbance rapidly approaching in the curbside lane. On the sidewalk ahead, a man in the yellow coveralls of a guest worker is visibly struggling with two Public Safety men. The windshield overlay has identified him as a PDP, or Potentially Disruptive Person. Ever since the bombings in Rio, of course, everyone’s been a little bit on edge, and feeling the slightest bit guilty that she’d ignored the headline earlier in the morning, Maria taps a finger on the windshield for more information. The public scanners have registered an unidentifiable, roughly weapon-sized object under the man’s clothing; and this, correlated with his location and immigration status, is surely enough to trip the threat-detection algorithm’s probability threshold.

But they’re barely abreast of the disturbance before a Public Safety van has whisked up to the curb, and amid a sudden bloom of khaki PS uniforms the guest worker is hustled in and away. Maria’s car torques up with the silent immediacy of electric drive; with a quick and almost subliminal sigh, she releases the tension she barely knew she was carrying, and the unpleasantness rapidly dwindles in the rearview mirror.

Before long the car glides to a halt in front of the Bourse, and the door pops open to let Maria out before the car heads off to its next booking. Maria places great stock in mindfulness, so today, as every day, she pauses for a moment to breathe and contemplate the massive visualization that pulses across the entire width and breadth of the façade. It’s hard to make out in direct sunlight, but if you shield your eyes and look carefully you can see how the whole surface of the building shimmers with graphics representing real-time trading activity.

At this hour, it’s still last night in Chicago and New York, and half a day yet before the London and Frankfurt exchanges open. So the activity dancing across the façade is all the Nikkei, the Hang Seng and the CSI 300…and the blips of an algorithm she and her colleagues have dubbed Dirty Frank, leaving its bizarre and so-far unfathomed spoor of stochastic trades across the minutes.

The view on Maria’s desk, of course, is more sophisticated by far than the poppy visualization splashed across the façade. Her job is to reverse-engineer algorithms like Dirty Frank, determine the logic driving each one, and help her firm develop tactics to counter them. The few hours of morning work pass quickly, as work always will for someone who is paid well to do what she’s good at, and loves what she is paid to do, and lunchtime rolls around before she knows it.

Everyone knows how awkward it can be to socialize with folks from different backgrounds, so Maria’s agenda app has booked her for lunch in a restaurant rated highly over the past six weeks by people whose activity on Noblessity’s resident-only social network suggests a high degree of compatibility. But when she gets out onto the Plaza, she finds it unusually, even alarmingly crowded, and asks one of her building’s uniformed concierges if he knows what’s going on.

It seems a private shopper for one of the luxury boutiques on the Skydeck level, deputized to serve one of the members of the boy band that played the Performing Arts Center last night, has uploaded a brief video of her charge shimmying into a tight new pullover — and of course the time- and location-stamped video has gone viral locally. In the fullness of time the shopper will be fired, doubtlessly, but the damage is already done. A lengthening line of cars waits to disgorge passengers at each of the bays around the plaza’s perimeter, and the walks and overways are perceptibly starting to fill with giddy young women.

The mast-mounted cameras high above Bourse Plaza have, of course, identified the potentially troublesome concentration of pedestrians, just as roadbed sensors register the increased traffic load and flag it for immediate attention. It’s just after shift change in Noblessity’s Intelligent Operations Center deep beneath the streets, and the fresh crew is quick to respond to the emergent condition – except for special occasions like the annual Jazz Festival, management likes to keep densities in the CBD low, and the oversight team’s contractual performance incentives depend on keeping the sidewalks at Level of Service C or better.

Ordinarily, of course, this isn’t an issue; between the oppressive heat and the long, triumphal blocks, nobody tends to walk very much or very far in Noblessity. Thanks to the private shopper’s indiscretion, though, today is shaping up to be different. Traffic on the sidewalks has started to thicken, contraflow movement is beginning to be difficult, one or two leading indicators of social distress have started to show up on the Big Board. It’s little more than threshold activity at this point, but if nobody issues a command override, active countermeasures will be deployed…and mindful of those incentives, nobody does. Up go the bollards around the plaza, down go the gates on the overways, and one after another, all of the signals turn green on all of the routes leaving the area.

Maria finds herself rerouted for the second time this day, this time on foot. Her phone runs a few quick calculations against her standing parameters and winds up recommending a trattoria-style Italian place she’s never thought to try before, just the other side of the World Expo Center — happy serendipity. Of everything on the menu, there are only a few options lit up on the tabletop as falling within her current diet guidelines, but the Caesar salad she chooses is delicious. The ten-minute walk back to work mostly takes her through temperature-controlled spaces, while between them the gorgeous, ethnic-inspired patterns of the active brise-soleils have unfolded to shield the walkways from the worst of the noonday sun. Even the more visible crowd-dispersion measures have faded back.

By the time Maria calls it a day, the East Asian markets are long closed, but NASDAQ’s just getting started. With a brief series of taps, she formally passes operational responsibility to her New York-based colleagues, and puts her desk to sleep. Her drive home is daydreamy, if a bit subdued — the billboards along the route all seem to be down, and she watches them drift by in a succession of vivid frames the color of clear sky.

After she’s changed into workout clothes, Maria orders a car to the Recreation Zone. Despite the heat, she loves to run along the manicured paths set between the lakes and fountains, to measure her progress against the countersunk lighting pavers. At the entrance to Oceanside Park, a two-man construction crew with a miniature backhoe is digging up the sensors they emplaced just last year — management has sourced a newer model, cheaper and more capable. True to every word of the promises the headhunter made, Noblessity is continuously in the process of being upgraded.

As Maria huffs around the outer loop, her sunglasses keep a running tally of the calories she’s burning, representing them as a blue line climbing diagonally across her peripheral vision. As the blue of her efforts finally begins to track the green of the optimal curve set by her company’s employee wellness plan, she feels a tight glow of satisfaction well up inside her. A brief flourish of trumpets in her earbuds and an animated burst of fireworks means she’s unlocked a mileage target achievement. This will mean new options at dinner for sure.

The original plan for the evening was to meet Mark for dinner at the new robata grill on the garden level of Entertainment Sector South. But just as she turns into her final lap, Maria’s sunglasses light up with a call. It’s Mark; he’s exhausted from what has been a long and arduous day of strategy sessions, and since she’s feeling pretty burnt out herself, they decide to meet up at home and order in. She knows from experience that she won’t even need to call for a car — the service’s adaptive load-balancing algorithm knows the fall of darkness will always mean a line of people who need rides home from the park — and the condo is mere minutes away.

Of the many amenities provided by her building, among Maria’s very favorites is the one she now avails herself of: ordered meals, like care packages from home and other deliveries, are deposited in the autolocker, so she doesn’t even need to deal with the delivery boy. Mark orders with a few taps on the kitchen screen, and they catch each other up on their respective days during the twenty or so minutes that go by before the autolocker chimes to announce the arrival of their dinner. They grab a few napkins and their containers of food and settle back on the couch to buy a movie from the wallscreen.

Before it’s even a third over, though, Maria realizes with a start that she’s started to nod off. She plants a kiss on the top of her husband’s head and pads off to bed. Just as she slides between the sheets, the briefest prayer of acknowledgment escapes her lips, a prayer of gratitude for another day of health, profit and productivity, another day in balance, another day in Noblessity.

Responsibility in technology reportage: the case of Talking Points Memo

The subject of this post may be rather obscure, particularly for those of you who are not from the United States, or do not pay attention to American political media. I hope you’ll excuse me, though, because I think it’s important to examine some of the ways that claims on behalf of the corporate use of information technology are normalized and made to seem natural by their treatment in the media.

My concerns here focus on Talking Points Memo, a political blog whose tendency, I think, it would be fair to describe as center-left by US standards (and center-right by those generally obtaining elsewhere). Over the past year or so, under the leadership of site founder and editor Joshua Marshall, TPM has been seeking to broaden its coverage beyond the party-political, with the clear ambition of supplanting brands like the dying Newsweek as a trusted general-news outlet. The site continues to position itself as “the premier digital native political news organization in the United States,” but I’m willing to bet that “political” isn’t destined to remain there forever. This is a site with its eye on the main chance.

Part and parcel of this effort has been a significant expansion into science and technology reportage, both handled by a TPM staffer named Carl Franzen. Ordinarily, I would welcome a political site — especially one as associated with the notion of rigorously-vetted crowdsourced investigative journalism as TPM — taking on the responsibility of covering a topic as salient to our choices in everyday life as emergent technology, but what I’ve seen so far doesn’t begin to measure up to my expectations.

In fact, it’s hard to overstate how disappointed I am with the quality of TPM’s technology coverage. In most articles appearing under Franzen’s byline, you’ll note, the content of a press release or a sympathetic interview is transcribed word for word into the TPM post, lending the site’s imprimatur to whatever claims are being made by the article’s subject. At no time does Franzen appear to challenge what he’s being told, seek any other informed perspective, or simply attempt to validate a proffered representation as factually accurate.

The most recent example of Franzen’s credulity is an almost perfectly ahistorical post accepting Google’s claim that their prototype Field Trip app somehow constitutes an example of “ubiquitous computing”; indeed, the piece comes perilously close to crediting Google with inventing ubiquitous computing in the first place. (And yes, those of you familiar with the ubicomp discourse will not in the slightest be surprised to learn that in among the hype recapitulated by Franzen is the inevitable claim to offer a “seamless” experience.) Note that Franzen allows Google VP John Hanke 163 words: over half the length of his 299-word post.

Here, in a piece entitled “Cooler Than Facebook” — and how the marketing department must have loved that — Franzen makes a pitch on behalf of Google Plus:

In the near future, social networking may involve navigating a stylishly animated Google Plus on your desktop computer while resting comfortably in a chair a few feet away, using your smartphone as a remote control.

What is this but an unchallenged, unexamined and limpidly transparent paraphrase of a Google team’s own description of their demo? It’s practically Eisenhower-era in its depiction of benevolent corporate forces deployed on behalf of your convenience and comfort. (“Resting comfortably in a chair,” you say? Why, Top Men are working on it even as we speak!)

It’s not just Google that gets this treatment. Here Microsoft “bring[s] the ability to accurately scan 3D objects to the masses,” with their “eye-popping, incredibly detailed” Kinect Fusion offering. And here is a selection of other Franzen pieces that read like press releases: for Barnes & Noble, eBay, Tesla…these, mind you, are just from TPM’s technology coverage over the last sixty days.

I think you may be beginning to sense a pattern here, no? From my perspective, though, the most galling example of Franzen’s work is probably this piece on Control Group, which not merely reads like the kind of flackery you find on PR NewsWire, but does so on behalf of some particularly pernicious claims.

It’s not just that Franzen’s gee-whiz tone is annoying, although it does annoy me. It’s the willingness to carry water for an agenda that would certainly be sinister if it had not been so thoroughly debunked over the past twenty years. Consider this unquestioned statement from Control Group CEO Campbell Hyers:

[I]n a corporate environment, you’d be able to swipe your badge and instantly have a conference room itself invite all of the right participants to the meeting and bring up the right slides on a projector screen and then log the whole conference as an audiovisual file later.

A more knowledgeable reporter would have spotted that Hyers’s pitch, far from being futuristic, is actually a string of clichés reaching straight back to Mark Weiser‘s 1990s tenure at PARC (and, at that, long problematized). This knowledge is somewhat arcane, of course, and it may not be particularly realistic to expect a cub reporter to have immersed him- or herself in the detailed history of the field being covered. But surely a more diligent reporter might have reached out to known sources of insight in that field, and attempted to vet the essential contours of the story he or she was being told. And that’s without touching the airless, hegemonic notion that conference rooms and employee identity badges and PowerPoint presentations are the natural order of things.

Franzen manages to accept at face value all of the claims made about the company’s putative “operating systems for physical space,” in a way that’s curiously at odds with TPM’s ostensible progressive agenda. (In fairness, the problems with Franzen’s coverage precede his arrival at TPM. Here’s an older, similarly breathless piece he contributed to Atlantic Wire.)

And it’s just that tension — between the latent logic of so many of these pieces and anything we might fairly think of as progressive politics — that prompts me to write this. I don’t pay much attention to the gadget-oriented technology blogs, with their pong of adolescent-male wish fulfillment, and I certainly can’t abide the Valley-centric tech industry coverage of other “technology” sites. But I don’t expect insight or critique from either of these directions — in fact, I’d be foolish to do so. By contrast, I surely do expect it from a site that not only, in every other realm in which it operates, upholds the honorable tradition of investigative journalism, but clearly does so in the name of a particular kind of politics.

I’m not asking that Talking Points Memo transform itself into, say, the New Left Review. But questioning the logic of the arguments that are made before the public, seeking alternative perspectives: these functions are both core to TPM’s mission, and key to the value it represents itself as providing to its audience. Lending its hard-won imprimatur to transparent PR and marketing tripe — on not a few occasions, again, literally word for word — not merely does not establish any new domain of credibility, it undermines whatever reputation for independence and quality the site currently enjoys. Franzen and, by extension, Marshall’s site are getting played. They’re being used. They would resent it, howlingly, from a corrupt Congressman or a racist sheriff, and they ought to resent it every bit as much from corporate flacks and clueless technoutopians.

What’s worse is that, given contemporary habits in media consumption, it is not at all unlikely that Franzen’s is the only coverage of the technology sector TPM’s core audience will be exposed to. TPM’s embrace of his work could all too easily lead otherwise-sophisticated readers to believe that viewpoints like the ones expressed in Carl Franzen’s writing are fully normalized and universally agreed-upon — if not, god forbid, the leftmost marker of acceptable opinion. This is precisely how consensus realities are established, how discourse policing works; if “even the left-leaning Talking Points Memo” endorses a point of view, anyone quibbling with it is by definition outside the bounds of the discursive community, and of fair comment. Like any publisher, in other words, Marshall has some responsibility for anticipating how the color of approval his act of publication lends to things is likely to be used, particularly by those ideologically unsympathetic to his other aims.

The old feminist adage reminds us that “the personal is the political,” and it’s precisely the same here: every technology comes with a conception of our role in the world bundled in it. It’s vital, particularly for those of us who think of ourselves as somehow being “on the left,” or in any way working toward a progressive agenda, that we ask how technologies can serve ends inimical to whatever goals we believe are worth the effort. And it’s unquestionably the prerogative of a would-be independent news outlet to apply to ostensibly innovatory products and services some standard of evaluation deeper than whether or not they are “cool.”

My bottom line is that I find the tone, tenor and, most importantly, the content of Franzen’s coverage sharply at odds with the progressive tradition I interpret Talking Points Memo as trying to uphold. I recognize some of the shortfalls in his work as the clear consequence of the intense pressure on an online outlet to publish, on an online writer to make word count. But that pressure doesn’t justify outright stenography. If Talking Points Memo is not willing or able to bring the exact same level of discernment, skepticism and professionalism to their technology coverage that Marshall would demand of any political coverage appearing under the site’s name, perhaps they ought to consider stepping back from the ambition of offering that coverage.

Voices that don’t matter all that much

This post is primarily intended for authors, and for those who intend to become authors, especially those whose area of interest is broadly technological. It’s about choosing a publisher wisely — or, more to the point, about the perils of not doing so.

As many of you know, in 2005 I started framing out the book that would eventually see the light of day as Everyware. As a first-time author with no track record, proposing a speculative work on what was then still very much an emergent area of practice, I assumed that there would be, at best, limited publisher interest in my pitch. So I settled for the first one willing to invest in my proposal, the New Riders imprint of Berkeley-based technology house Peachpit Press. (You should know that Peachpit is itself a subsidiary of the SOPA-supporting Pearson Education, but that’s a story for a different day.)

In retrospect I clearly could and should have held out for not merely a different publisher, but a different kind of publisher. New Riders has never had the foggiest idea what to do with Everyware, from the original editor they assigned to the book — a mommyblogger! — to this entirely-serious proposal for the cover to the slapdash way they handled converting the book into an electronic format.

A lot of this, in naked point of fact, is nobody’s fault but my own. I chose poorly. That’s all on me, and properly so; consider me chastened by the experience.

But New Riders continues to have responsibility for Everyware, and they continue to serve it poorly, in ways that undermine its chances of making money for them. There’s absolutely no excuse for this kind of thing. Case in point: the way Everyware shows up on Readmill, an exciting new social-reading application. That’s how your book would show up, too, if you entrusted its publication to New Riders.

You see the way there’s no cover image for the book, like there is for every other book on the service? You see the way Readmill thinks “Mobipocket” is part of the book’s title? These artifacts are not Readmill’s fault. Nor are they Amazon‘s, or any other vendor’s. They’re part and parcel of the way the publisher has hamfistedly treated the digital edition: as an afterthought, as something not even worth the few minutes’ effort fixing these blunders would have required.

None of this might have mattered, particularly, in the days when digital books were niche propositions. But given Everyware‘s subject and target audience, I have to imagine that the overwhelming majority of people who’d be interested in the book in the first place would be inclined to engage it digitally. Wouldn’t it make sense to treat these people — these paying customers — not like second-class citizens, but like the valued, appreciated readers they are?

Like I say, I’ve learned my lesson. But if there are any among you who are contemplating authorship, please try to profit from my mistakes. Seek a publisher who understands and will support your work — and, just as importantly, who displays some capability and intention of investing in you. If you can’t find a publisher who meets this description, better you launch your title yourself. You have Kickstarter, you have Amazon, you have a ton of great tools and distribution channels that didn’t exist or weren’t fully robust even a year ago.

Trust me on this. New Riders may well be a poster child for everything that’s wrong with the publishing industry, but they’re not alone. If you believe in your ideas and have invested effort and craft in expressing those ideas in the form of a book, you deserve better…and so does your book.

Neopanoptical

We’re all familiar with the Panopticon, right? The notional prison devised by the eighteenth-century English utilitarian Jeremy Bentham?

No? OK, let me gloss it for you, and people for whom this is a familiar story will forgive me and, I’m sure, point out my mistakes of fact, emphasis or interpretation.

Bentham imagined a prison built in the form of a gigantic ring, with cells by their hundreds disposed around its inner wall. In the very middle of the structure’s central void stood the prison’s sole watchtower, atop which he placed a guard shack with 360-degree visibility.

How to maintain control over the prisoners with but a single tower and a relatively small cadre of guards? For all its formal ingenuity, Bentham’s real innovation was this: the cells lining the periphery were to be brightly illuminated at all times, while the guard tower itself was never lit. The guards were therefore free to observe activity in any cell, at any moment…while the contrast between their brightly-lit cells and the watchtower’s mute windows meant prisoners could never be certain if the guards were observing them, someone else or no one at all. (In principle, the prison administration could go a step further and achieve the same docilizing results without even staffing the tower. How would the inmates even know? After all, they were, and would remain, literally in the dark.)

And there was one final visibility-related wrinkle. The prison would be sited on a hill just outside of town, always there as a vivid reminder that any trespass of the social order would come at a price.

Bentham called his device the Panopticon, and the twentieth-century philosopher of power Michel Foucault famously used it as a jumping-off point for his own dissection of the ways surveillance, visibility and discipline work in contemporary society. One of Foucault’s arguments was that, never knowing when they are actually being watched, the watched internalize the gaze and begin to discipline themselves; over time this internalization becomes an entirely unconscious process, and we carry disciplinarity into the ways we move, speak, act and hold our bodies.

We can see this at work on the most literal level in the way we react to the presence of surveillance cameras. An ordinary CCTV camera’s gaze is directional. It sees you, but you see it seeing you. And should you be interested in evading its gaze, you’re free to tailor your actions accordingly.

As Anna Minton notes, though, in last year’s invaluable Ground Control, the simplest possible material intervention — housing the selfsame camera under an opaque polycarbonate dome, costing at the very most a few tens of dollars — reproduces precisely the innovation Bentham placed at the heart of the Panopticon. Once the mechanism itself is screened by the dome, anything you do in the 360-degree field around it is potentially in its field of vision. You’re no longer quite certain whether you’re actually under surveillance at any given moment — in fact, there needn’t even be a functioning camera under the dome at all — but are in the interests of prudence forced to assume that you are. You’re compelled to internalize the sense that you’re being watched.

Domes are cheaper than cameras, but of course signs are that much cheaper still; I often suspect that the big yellow notice warning me that I’m under CCTV surveillance is unaccompanied by any actual gear to speak of. What could possibly be a more effective deterrent than the watcher that can’t be seen at all?

What’s the harm in all of this neopanopticism? While there have been cases in which this latent apparatus of control has proved decisive in bringing criminals to justice, or at the very least provided us with a few moments of lulzy fun, longer-term statistical analysis paints a different picture. London’s Metropolitan Police admits that CCTV imagery was used in the resolution of less than four out of every hundred crimes. All that watchfulness may be having some effect on behavior, but it sure isn’t buying the public any particular increment of personal safety.

Minton points out that long-cherished civil liberties may not be the only thing being damaged by the presence of CCTV. She compares Britain with CCTV-free Denmark, and from her review of the available data concludes that pervasive surveillance is actually counterproductive. (The conjectured causative mechanism: because people feel that the implicit presence of supervisory authority makes someone else responsible for dealing with crime, they tune out the incidents they witness, or otherwise choose not to intervene.)

In practice, technologies like CCTV surveillance are always exceedingly difficult to weigh in the balance, the more so when technical developments like doming change the envelope of affordances and constraints in which they operate. The complications are redoubled when those of us who are concerned with public space can only wield dry abstractions like “civil liberties” against hot-button appeals and the human reality of victimization. In this light, it’s not unreasonable to argue that some loss of anonymity would be acceptable if it meant the capture and punishment of muggers and rapists and hit-and-run drivers. (I don’t happen to agree, personally, but it’s not an outright ridiculous belief to hold.)

But we should be very clear that that’s the trade-off we’re being offered. Furthermore, proponents of technologies like CCTV should also be conversant with — and forthright about — the potential for mission creep inherent in them. Systems already deployed are turned toward unforeseen uses; frameworks we already recognize (and therefore, we reckon, understand sufficiently well) are endowed with entirely new potential as easily as you’d blow new firmware into your phone or digital camera. And this happens every day: when we were in Wellington, for example, we were told that the surveillance cameras that voters approved to help manage traffic congestion had been repurposed for crime prevention, without a corresponding degree of public consultation.

Feed the image stream coming off of them to a facial-recognition algorithm, and you’ve got an entirely different kind of system on your hands, with entirely different potentials and vastly expanded implications. Yet the cameras, domed or otherwise, look no different from one day to the next. How are people supposed to inform themselves, or avail themselves of their existing prerogatives, under such circumstances?
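
To make concrete just how little stands between one kind of system and the other, here is a rough sketch of what that repurposing might look like in practice: a few lines of Python against the freely available OpenCV library, with its bundled face detector standing in as a crude proxy for genuine facial recognition and a made-up camera address standing in for the live feed. Nothing here describes any actual deployment; the point is only that the transformation happens entirely in software.

```python
# Rough sketch only: the camera hardware is untouched, but a handful of
# lines of software turn a passive video feed into a face-detection system.
# The RTSP address below is invented, and OpenCV's stock Haar cascade is a
# crude stand-in for real facial recognition.
import cv2

# Pre-trained frontal-face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Any existing feed will do: a webcam index, a video file, or a network stream.
stream = cv2.VideoCapture("rtsp://camera.example.invalid/stream")  # hypothetical

while True:
    ok, frame = stream.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # From here it is a short step to cropping the face, matching it
        # against a watchlist, logging it or raising an alert.
        print(f"face at x={x}, y={y}, size={w}x{h}")

stream.release()
```

The same camera, the same dome; only the software has changed, and nothing about that change is visible from the street.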

And all of this is still confining our discussion to the visual realm! Yet the real relevance of this neopanoptical drift will only become obvious to most of us as more data is gathered passively in public space, through location-aware devices, embedded sensors and machine inference built on them. It’s these developments which will, as I’ve argued elsewhere, “permanently redefin[e] surveillance,” and it’s these that I’m more worried about than any simple plastic dome. If we don’t get a collective handle on what disciplinary observation means for our polities and places now, we’ll be in genuine trouble when that observation gets infinitely more distributed and harder to see.

Of lucky cats, lameness and game-like logics

So of course Russell’s spot-on here, about the terrible things that await us as poorly-considered game-like logics are superimposed over everyday life. He never comes right out and says it, but I assume he’s reacting to Jesse Schell‘s recent epiphany about networked life, gaming tropes and the motivational mechanics they afford when brought together, and maybe the recent popularity of Foursquare, with its badges and mayorships.

Schell’s argument (or one of them, anyway) is that the everyday environment is now sufficiently instrumented and internetworked that the psychological triggers and incentives developed by game designers to motivate in-game behavior can be deployed in real life. A poster on MetaFilter puts it in a nutshell: “points for brushing your teeth, doing your homework, eating your cornflakes. Gain levels for riding the bus instead of driving. Net-integrated sensors in every device to keep track of your score and upload them to Facebook or wherever. Tax incentives if you get a good enough score on your kid’s report card or read the right books.”
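
Strip away the sensors and the social-network plumbing, and the loop being described is almost embarrassingly simple. The sketch below is purely illustrative: the behaviors, point values and level threshold are invented for the example, and the upload step is an empty stand-in for whatever service would actually harvest the score.

```python
# Purely illustrative: a toy version of the "points for everyday life"
# mechanic described above. All behaviors, point values and thresholds
# are invented for the example.
POINTS = {
    "brushed_teeth": 5,
    "ate_cornflakes": 5,
    "did_homework": 20,
    "rode_the_bus": 15,
}

LEVEL_SIZE = 100  # arbitrary number of points per "level"


class EverydayLifeGame:
    def __init__(self, player):
        self.player = player
        self.score = 0

    def log_behavior(self, behavior):
        """Award points for a sensed behavior and report progress."""
        self.score += POINTS.get(behavior, 0)
        level = self.score // LEVEL_SIZE
        print(f"{self.player}: {behavior} -> {self.score} points, level {level}")
        self.upload_score()

    def upload_score(self):
        # Stand-in for the step where the score leaves your toothbrush and
        # becomes legible to advertisers, insurers or the tax office.
        pass


game = EverydayLifeGame("you")
game.log_behavior("brushed_teeth")
game.log_behavior("rode_the_bus")
```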

And this is more than passing scary, because these motivators work. Just as food designers have figured out how to short-circuit our wetware with precisely calibrated doses of fat, salt and sugar, game developers trip the dopamine trigger with internally-consistent, but generally otherwise worthless, symbolic reward systems. That they’ve (knowingly or otherwise) learned how to play this primordial pathway like a piano is attested to by the untold gigahours gamers worldwide spend voluntarily looping out the most arbitrary actions, when most of them presumably have a choice of other pretty swell things they could be doing. Like, y’know, their partners.

What happens when incentive mechanics like this leak out of gamespace and into the world? In the long run it may be for the best that ad agencies remain so densely provisioned with the manifestly unclued, because this way of doing things would be nothing short of terrifying in the hands of someone who knew what they were doing. The short-term picture, though, is clearly less reassuring; as Russell puts it, “we’re going to encounter a bunch of crappy sorta-games foisted on us.”

You think he’s jumping the gun, assuming the worst, maybe being a little hyperbolic? Ladies and gentlemen, I give you Exhibit A.

But fortunately, there are other games to be played, much cleverer and more interesting ones. Bruce Sterling offered a lovely vision of networked rewards in the real world in his 1998 short story “Maneki Neko.” The story has dated badly in some ways — in a precise inversion of what came to pass, it’s amusing to see the story’s Japanese wield sleek, protean “pokkekons” while their American counterparts suffer with clunky Silicon Valley PDAs — but in other ways it’s clear that Bruce had the notion sussed.

His depiction of a sweetly networked gift economy, in particular, makes the Schellian universe look tawdry. “Maneki Neko” would seem to argue that you don’t need “points” and meaningless achievements unlocked to motivate behavior, when enlightened self-interest and the joys of participating in reciprocal agalmics are sufficient.

I think we could all see it coming the moment Schell’s DICE2010 talk went up on the technology blogs. “See”? You could practically smell the agency nation bruising its collective index finger on the mouse key as it raced to scrub through the half-hour video in search of bullet-pointable content for the next morning’s PowerPoint. Russell’s probably being too generous by half: I think we’re in for a Laird Hamilton-sized wave of pointlessness, as too many not-bright-enough parties fall all over themselves trying to enact and deploy incompatible, mutually incoherent Schell-style solutions.

In some ways, it really is too bad. Given that vice is generally its own reward, the fact that the behaviors such structures are designed to motivate need to be incentivized at all suggests to me that there’s nothing inherently wrong with most of them. For that matter, I tend to be favorably inclined toward any incentive system that begins, however tentatively, to jimmy our lives from the grip of the money economy. I just wish fewer people had described Schell’s video enthusiastically, as “the most mindblowing thing I’ve seen all year,” and more had described it as “something potentially troubling, that we need to think carefully about.”

Because the dopaminergic system can be an inhumanly powerful force, beside which all our notions of “will” are laughable, and where it can take a person is not at all pretty. I just don’t like thinking of it as a tool available to someone bent on designing my life for me. And with all due respect, especially not to a community dedicated to the proposition that “reality is broken [and] game designers can fix it.”

That’s a heavy place to wind up, and here I’d intended this post to be both briefer and lighter. But maybe some of these notions could do with a bit of taking seriously.
