Archive | Know your enemy

On counter-hegemony, or: “I got it! We’ll have them write hit songs.”

At the moment, I’m neck-deep in my Verso stablemates Nick Srnicek and Alex Williams’s still-newish book Inventing the Future; things remaining more or less stable schedulewise, I’ll most likely finish it later on today, or tomorrow at the latest.

It’s a strange book, Inventing. You may have caught some of the buzz around it, and that buzz exists for good reason. (It’s not just the superspiffy totebags Verso had ginned up for it, though I’m sure those do not hurt one whit.) At its heart a passionate argument against work and for an end to neoliberalism and its reality control — forged along the same rough lines as those Paul Mason and the Fully Automated Luxury Communism kids are currently touting — Inventing is a genuinely curious mixture of crystal-clear analysis, righteous provocation and infuriating naivety. If you’re even remotely interested in what emergent technologies like machine learning and digital fabrication might imply for our capacity for collective action, and especially if you think of yourself as belonging to the horizontalist left, you should by all means pick it up, read it for yourself and form your own judgments. (Here’s Ken Wark’s take on it; I endorse most of his thoughts, and have a great deal of my own to add, which I’ll do in the form of my own forthcoming book.)

Late in the book there’s a passage concerning the stance Srnicek and Williams feel the postcapitalist left needs to adopt toward the mainstream media: if the “counter-hegemonic” project they describe is to have any hope of success, they argue, “it will require an injection of radical ideas into the mainstream, and not just the building of increasingly fragmented audiences outside it.”

Well. It must be said that this is not one of the book's high points. In its latent suggestion that the only reason Thomas Piketty and Donna Haraway aren't cohosting a lively, popular Sunday-morning gabfest on NBC right this very moment is that we, the progressive public, are somehow not trying hard enough, or have failed to sufficiently wrap our pointy heads around the awesome conditioning power of the mass media, in fact, it's somewhere between irritating and ridiculous. (It's hard for me to see how Srnicek and Williams's argument here is substantively any different from that stroke of market-savvy inspiration the beloved but famously marginal Minutemen skewered on the cover of their second-to-last album. And now you know where the title of this post came from.)

Nevertheless, they’re onto something. Though that more-than-faintly patronizing tone never quite dissipates, S&W eventually find themselves on far firmer ground when they argue that “[l]eftist media organizations should not shy away from being approachable and entertaining, gleaning insights from the success of popular websites.” I was able to shake off the momentary harrowing vision I had of Leninist Buzzfeed, and press on through to what I take to be their deeper point: radical thought can actually resonate broadly when care is taken to craft the language in which that thought is expressed, and still more so when insular, self-congratulatory obscurity is avoided in the design of its containers. I endorse this notion wholeheartedly. This recent appreciation of Jacobin hits many of the same notes; whatever you think of Jacobin’s politics, it’s hard to deny that its publishers consistently put together a sprightly, good-looking read. (I’d call it “the Monocle of the left,” but that would be to imply that Monocle’s content is far more compelling than in fact it is.)

You might still argue that S&W ought to spend a little more time with McLuhan. My own feeling is that there’s more to distrust about the “mainstream media” than merely its overtly political content — that consuming information in the form of tweets, listicles, Safety Check notifications, screens overloaded with crawlers, and possibly even glowing rectangles themselves is hard to square with the kind of awareness I at least find it necessary to cultivate if I’m to understand anything at all about the way the systems in which I’m embedded work.

But ultimately, these are quibbles. I agree with S&W when they argue that overthrowing the weaponized “common sense” of the neoliberal era is an explicitly counter-hegemonic project; that developing a functioning counter-hegemony is something that requires long-term commitment; and that those with truly radical programs need to reconsider the relationship between “pop,” “popular” and “popularity” if that whole hearts-and-minds thing is ever going to work out for them. (I’m honor-bound to point out that Saul Alinsky said as much fifty years ago, but perhaps that too is a quibble.) So: no. I have no problem at all with presenting complex and potentially challenging ideas accessibly, so long as they can be rendered accessible without dumbing them down. If successful counter-hegemonic media looks a whole lot more like a Beyoncé video than some preciously anti-aesthetic art installation, so much the better. Bring on the hit songs.

Further notes on the quantified self

We can surely read the various technologies of the quantified self as tending to “ensure that people continue to act and dream without any form of connectedness and coordination with others” (Stavrides), and this quick, cogent piece will only reinforce that sense.

Now, I can imagine a world — just barely, but it can be done — in which the capture of biometric measurements by a network with qualities of ubiquity and persistence was somehow not invidious. I can even imagine a world in which that capture resulted in better collective outcomes, physically, psychically and socially. But in our world, the one we actually live in, I think the very best we can possibly hope for from these technologies is positive-sum competition, a state in which each of our individual outcomes only improves for the fact that we are set against each other.

That’s the best-case scenario. Even that is still competitive, still oriented solely toward the individual, still only bolsters the unquestioned supremacy of the autonomous liberal subject. And far more likely than the best case, frankly, is the case in which data derived from these devices is used to shape life chances, deprive us of hard-won freedoms at work, mold the limits of permissible expression or even bring violence to bear against our bodies.

My bottom line is this: Though I’d be happy to be proven wrong, given everything I know and everything I’ve seen it is very, very difficult for me to imagine socially progressive uses of quantified-self technologies that do not simultaneously generate these easily foreseeable, sharply negative consequences. It may be my own limitations speaking, but I can’t see how things could possibly break any other way. In this world, anyway.

VR: I’m frankly surprised they admitted this out loud

Wagner James Au, who would know, has what in a better world would be an incendiary piece in the latest Wired. Au’s piece lays it all right out there regarding the meaning and purpose of virtual reality.

As VR’s leading developers straight-up admit in the piece, its function is to camouflage the inequities and insults of an unjust world, by offering the masses high-fidelity simulations of the things their betters get to experience for real. Here’s the money quote, no pun intended: “[S]ome fraction of the desirable experiences of the wealthy can be synthesized and replicated for a much broader range of people.” (That’s John Carmack speaking, for future reference.)

I always want to extend to those I disagree with some presumption of good will. I don’t think it’s either healthy or productive or pleasant for the people around me to spend my days in a permanent chokehold of high dudgeon. And I always want to leave some room for the possibility that someone might have been misunderstood or misquoted. But Au is a veteran reporter on this topic; I think it’s fair to describe his familiarity with the terrain, and the players, as “comprehensive.” So I rather doubt he’s mischaracterized Carmack’s sentiments, or those of Oculus Rift founder Palmer Luckey. And what those sentiments amount to is outright barbarism — is nothing less than moral depravity.

The idea that all we can do is accede to a world of permanent, vertiginous inequity — inequity so entrenched and so unchallengeable that the best thing we can do with our technology is use it as a palliative and a pacifier — well, this is everything I’m committed to working against. Thankfully there are others who are also doing that work, who understand the struggle as the struggle. Thankfully, I think most of us still understand Carmack’s stated ambition as vile. We do, right?

I’ll have more to say about the uses of VR (and its cousin augmented reality, or AR) shortly.


Jeremy Rifkin’s Zero Marginal Cost Society is a book that’s come up a few times in discussions here, and while I may have mentioned that I have multiple problems with it — its transparent assembly by interns, the guileless portrayal it offers of the Internet of Things, and particularly some of the lazy methods of argumentation Rifkin occasionally indulges in — it gets one thing so thunderingly right that it is worth quoting at some length.

The following is the best short description of the neoliberal evisceration of the public sphere between 1979 and the present I have ever come across. It resonates with my experience in every particular — and I’ve lived through this, seen it unfold on both sides of the Atlantic. If you were born anytime after, oh, 1988 or so, it will be very useful in helping you understand just what has been done to your world, and to you.

I’ll be honest with you: Sometimes I want to weep for what we’ve lost. Just the enumeration in the very first paragraph is almost overwhelming.

The Reagan/Thatcher-led economic movement to privatize public goods and services by selling off telecommunications networks, radio frequencies, electricity generation and transmission grids, public transport, government-sponsored scientific research, postal services, rail lines, public lands, prospecting rights, water and sewage services, and dozens of other activities that had long been considered public trusts, administered by government bodies, marked the final surrender of public responsibility for overseeing the general welfare of society.

Deregulation and privatization spread quickly to other countries. The magnitude of the capitulation was breathtaking in scope and scale. Governments were hollowed out overnight, becoming empty shells, while vast power over the affairs of society shifted to the private sector. The public, at large, was stripped of its “collective” power as citizens and reduced to millions of autonomous agents forced to fend for themselves in a marketplace increasingly controlled by several hundred global corporations. The disempowerment came with lightning speed, leaving little time for public reaction and even less time for public engagement in the process. There was virtually no widespread debate at the time, despite the breadth of the shift in power from the government to the private sector, leaving the public largely unaware and uninvolved, although deeply affected by the consequences.

For the most part, free-market economists, business leaders, neoliberal intellectuals, and progressive politicians — like President Bill Clinton of the United States and Prime Minister Tony Blair of the United Kingdom — were able to prevail by portraying the market as the sole key to economic progress and castigating critics as old fashioned and out of touch or, worse, as Soviet-style apologists for big government. The collapse of the Soviet empire, with its widespread corruption, inefficiencies, and stagnant economic performance was trotted out at every occasion as a whipping boy and proof positive that the well-being of society would be better assured by placing all the economic marbles in the hands of the market and letting government shrivel to the most rudimentary of public functions.

Large segments of the public acquiesced, in part because they shared a sense of frustration and disappointment with government management of goods and services — although much of the ill feeling was contrived by a business community anxious to penetrate and mine a lucrative economic largesse that had long remained under government auspices and beyond the reach of the market. After all, in most industrialized countries, publicly administered goods and services enjoyed an enviable track record. The trains ran on time, the postal service was dependable, government broadcasting was of a high quality, the electricity networks kept the lights on, the telephone networks were reliable, the public schools were adequate, and so forth.

In the end, free-market ideology prevailed.

After this rather brutal, unremitting account, it is true that Rifkin points us at the global Commons he perceives aborning as a legitimate source of hope. Let us, in turn, hope that he’s onto something. To quote someone I hold in the deepest contempt, there really is no alternative.

Couchsurfing: When sharing is theft

This is admittedly minor, but I find it rather telling. At the moment, I’m doing some research on the so-called “sharing economy” for my book, and in particular am digging into the background of the travesty that ensued when the founders of the Couchsurfing hospitality-exchange network chose to pivot it from something built on purely voluntary participation into a for-profit enterprise.

I hadn’t been to the Couchsurfing website itself for quite a while — as in, the last time I visited, it was a .org. So when I first loaded it this time around, I was looking at it with fresh eyes. And maybe that’s why all of the images on the page that are ostensibly of satisfied Couchsurfers registered so oddly to me. You really can’t help but notice that, for self-submitted pictures of people from all over the world — and, at that, members of a site dedicated to free hospitality exchange — they seem unusually straightforward, consistent and professional in their composition and lighting.

Put more directly, they look like commercial stock photography. And that isn’t what you’d necessarily expect from a platform that theoretically prides itself on the strength and genuineness of the peer-to-peer relationships it enables. A few years ago, I would have had to wonder whether these images did in fact represent happy Couchsurfers; now, of course, we have Google Image Search. It only took me a few seconds’ clicking around to confirm what I had suspected — or actually, something even more troubling.

It’s not merely that these are not at all images of actual Couchsurfers; in itself, that might readily enough be forgiven. It’s that the images appear to have been downloaded, altered and used in a commercial context without their creators’ knowledge or consent — in one case, in fact, in direct contravention of the (very generous) terms of the license under which they were offered.

Here, let’s take a look:
– The image labeled “Jason” is one of photographer David Weir’s 100 Strangers, originally labeled with a copyright notice;

– “Dang” is a crop of commercial photographer Anthony Mongiello’s headshot of actor Stanley Wong;

– “Sonja” and “Gérard” are two of Chris Zerbes’ Stranger Portraits. While Chris does make his photos available under a Creative Commons license, that license clearly stipulates that any use must not be commercial, that the work must be attributed to him, and that no derivatives may be made from the original image. All of those provisions are violated here.

It’s bad enough that Couchsurfing would choose to use stock photography, when imagery of actual site members would tell a much more compelling story. But that they’ve chosen to gank images from hard-working photographers, to do so for commercial gain, without even a gesture at attribution? To me, that says a great deal about just what kind of “sharing” we mean when we talk about a sharing economy.

UPDATED: Couchsurfing has since removed the images in question, without otherwise acknowledging this post or my other attempts to communicate with them. For the record, such as it is, I enclose screenshots of the page as it previously appeared.



Uber, or: The technics and politics of socially corrosive mobility

We can think of the propositions the so-called “smart city” is built on as belonging to three orders of visibility. The first is populated by exotica like adaptive sunshades, fully-automated supply and removal chains, and personal rapid transit (“podcar”) systems. These systems feature prominently in the smart city’s advertising, promotional renderings and sales presentations. They may or may not ever come into being — complex and expensive, they very often wind up value-engineered out of the final execution, or at least notionally deferred to some later phase of development — but by announcing that the urban plan in question is decidedly oriented toward futurity, they serve a valuable marketing and public-relations function. Whether or not they ever amount to anything other than what the technology industry calls “vaporware,” they are certainly highly visible, and can therefore readily be held up to consideration.

A second order consists of the behind-the-scenes working of algorithmic systems, the black-box churn of “big data” analytics that, at least in principle, affords metropolitan administrators the predictive policing, anticipatory traffic control and other services on which the smart-city value proposition is premised. These systems are hard to see because their operations are inherently opaque. While the events concerned are inarguably physical and material, they are far removed from the phenomenological scale of human reckoning. They unfold in the transduction of electrical potential across the circuitry of databases and servers, racked in farms which may be hundreds or even thousands of miles from the city whose activities they regulate. Such systems are, therefore, generally discernible only in their outputs: in the differential posture or disposition of resources, or the perturbations that result when these patterns are folded back against the plane of experience. At best, the dynamics involved may show up in data visualizations bundled into a “city dashboard” – access to which itself may or may not be offered to the populace at large – but they otherwise tend to abscond from immediate awareness.

The third order, however, may be the hardest of all to consider analytically, and this is because it is predominantly comprised of artifacts and services that are already well-assimilated elements of everyday urban life. Being so well woven into the fabric of urban experience, the things that belong to this category, like other elements of the quotidian, fade beneath the threshold of ordinary perception; we only rarely disinter them and subject them to critical evaluation. In this category we can certainly place the smartphone itself: a communication device, intimate sensor platform and aperture onto the global network of barely half a decade’s vintage, that has nonetheless utterly reshaped the tenor and character of metropolitan experience for those who wield one. Here as well we can situate big-city bikesharing schemes — each of which is, despite a certain optical dowdiness, a triumphant assemblage of RFID, GPS, wireless connectivity and other networked information-processing technologies. And here we find the network-mediated mobility-on-demand services that have already done so much to transform what it feels like to move through urban space, at least for a privileged few.

Inordinately prominent among this set of mobility brokers, of course, is the San Francisco-based Uber. So hegemonic is the company that its name has already entered the language as a shorthand for startups and apps dedicated to the smartphone-mediated, on-demand provision of services: we hear the Instacart offering referred to as “an Uber for groceries,” Evolux as “an Uber for helicopters,” Tinder as “an Uber for dating,” and so on. If we are to understand personal mobility in the networked city — how it works, who has access to it, which standing patterns it reinforces and which it actually does disrupt — it might be worth hauling Uber up into the light and considering its culture and operations with particular care.

It may seem perverse to describe something as “difficult to see” when it is so insistently, inescapably visible. To be sure, though, Uber’s sudden prominence is not merely due to the esteem in which its users hold it; the company has a propensity for becoming embroiled in controversy unrivaled by its peers, or indeed by just about any commercial enterprise, regardless of scale or sector. To list just some of the most widely reported incidents it has been involved in during the past half-year:

That any given mobility technology should become a flashpoint for so many controversies so widely dispersed over a single six-month period is remarkable. That all of them should involve a sole mobility provider may well be unprecedented. The truth is that we certainly do see Uber…but not for what it is. Its very prominence helps to mask what’s so salient about it.

What is Uber? Founded in 2009 by Travis Kalanick — a UCLA dropout whose only previous business experience involved the peer-to-peer file exchange applications Scour Exchange and Red Swoosh — Uber is a company valued as of the end of 2014 at some $40 billion, currently operating in over 200 cities worldwide. Like others of its ilk, it allows customers to arrange point-to-point journeys as and when desired, via an application previously loaded on their Apple or Android smartphones. All billing is handled through the application, meaning that the rider needn’t worry about the psychological discomfort of negotiating fares at origin or tips at their destination. Its various offerings, which range from the “low-cost” uberX [sic] to the super-premium UberLUX, are positioned as being more convenient, and certainly more comfortable, than existing municipal taxi and livery (“black”) car services. Regardless of service level, the vehicles involved are owned and operated by drivers the company has gone to great lengths to characterize not as employees (with all that would imply for liability insurance, wages, and the provision of employee benefits) but as independent contractors.

Uber is classified under California law as a “transportation network company,” and while the dry legal taxonomy is technically accurate, it masks what is truly radical about the enterprise. Seen clearly, Uber is little more than a brand coupled to a spatialized resource-allocation algorithm, with a rudimentary reputation mechanic running on top. The company owns no fleet, employs relatively few staff directly, and — as we shall see — may not even maintain public offices in the commonly-understood sense of that term.

What distinguishes it from would-be competitors like Hailo and Lyft isn’t so much any particular aspect of its organization or technical functionality, but its stance. Uber comes with an overt ideology. (Even if you somehow remained unaware of CEO Kalanick’s libertarian politics, or his fondness for the work of Ayn Rand — both of which have been widely reported — the nature of that ideology might still readily be inferred from his company’s very name.) Despite a tagline positioning itself as “Everyone’s Private Driver,” Uber has never for a moment pretended to universality. Just the opposite: every aspect of the marketing and user experience announces that this is a service consciously designed for the needs, tastes, preferences and status anxieties of a very specific market segment, the aspirant global elite.

Uber makes no apologies about its policy of adaptive surge pricing, in which fare multipliers of up to 8X are applied during periods of particularly heavy demand. But at an average fare of around twenty US dollars, a single Uber ride can still be justified by most members of its target audience as an “affordable luxury” — all the more so when enjoyed as an occasional rather than a daily habit. Availing oneself of this luxury, and being seen to do so, is self-evidently appealing to a wide swath of people living in densely built-up places around the world — necessarily including among their number a great many who would likely be appalled by Kalanick’s politics, were they ever unambiguously forced to consider them.
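
The arithmetic of surge pricing is simple enough to sketch. Uber's actual pricing formula is proprietary and certainly more elaborate than this; the following toy function, with its invented demand/capacity ratio, only illustrates how a capped demand multiplier turns a twenty-dollar "affordable luxury" into something else entirely at peak times:

```python
def surge_fare(base_fare: float, demand: float, capacity: float,
               cap: float = 8.0) -> float:
    """Toy surge pricing: scale the fare by the ratio of ride requests
    to available drivers, capped at a maximum multiplier.

    The demand/capacity ratio and the 8x cap are assumptions for
    illustration; Uber's real formula is not public.
    """
    multiplier = max(1.0, min(demand / capacity, cap))
    return round(base_fare * multiplier, 2)

print(surge_fare(20.0, demand=50, capacity=100))    # slack demand: 20.0
print(surge_fare(20.0, demand=300, capacity=100))   # 3x surge: 60.0
print(surge_fare(20.0, demand=1200, capacity=100))  # capped at 8x: 160.0
```

Note that under this logic the multiplier never drops below 1.0: the surge mechanism only ever ratchets the price upward from the base fare.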

With Uber, Kalanick has made it clear that a service founded on a relatively high technological base of ubiquitous smartphones, sophisticated digital cartography and civilian GPS can be wildly successful when it is wrapped in the language not of technology itself, but of comfort and convenience. So enticing, indeed, is this combination that hundreds of thousands of users are willing to swallow not merely the technologically complex but the politically unsavory when sugarcoated in this way. While this will likely strike most observers as rather obvious, it is an insight that has thus far eluded other actors with a rhetorical or material stake in the development of a heavily technologized urbanity.

This state of affairs, however, is unlikely to last forever. Other interested parties will surely note Uber’s success, draw their own conclusions from it, and attempt to apply whatever lessons they derive to the marketing of their own products and services. If Uber is a confession that the “smart city” is a place we already live in, then, it is also a cautionary case study in the kinds of values we can expect such a city to uphold in its everyday operation — some merely strongly implicit, others right out there in the open. Just what are they?

Those who can afford to pay more deserve to be treated better.
Uber’s proposition to its users collapses any distinction between having and deserving; quite simply, its message is that if you can afford to be treated better than others, you’re entitled to be treated better than others.

This is certainly one of the logics of resource allocation available to it in the late-capitalist marketplace; as Harvard’s Michael Sandel observes, in his 2012 What Money Can’t Buy, this particular logic is increasingly filtering into questions traditionally decided by different principles, such as the (at least superficially egalitarian) rule of first-come/first-served. And it is not, after all, very different from the extant market segmentation dividing public transit from taxi or livery-car service: money to spend has always bought the citydweller in motion a certain degree of privacy and flexibility in routing and schedule. What specifically distinguishes Uber from previous quasi-private mobility offerings, though, and takes it into a kind of libertarian hyperdrive, is its refusal to submit to regulation, carry appropriate insurance, provide for the workers on whom it depends, or in any way allow the broader public to share in a set of benefits distributed all but exclusively between the rider and the company. (Driver comments make it clear that it is possible to make decent money as an Uber driver, but only with the most exceptional hustle; the vigorish assessed is significant, and monthly payments on the luxury vehicles the company requires its drivers to own saddle them with an onerous, persistent burden.)

Uber’s “disruptive” business model forthrightly treats the costs of on-demand, point-to-point mobility as externalities to be borne by anonymous, deprecated others, and this is a strong part of what makes it so corrosive of the public trust. This becomes most acutely evident when Uber drivers are involved in fatal accidents during periods when they do not happen to be carrying passengers, as was the case when driver Syed Muzzafar struck and killed six-year-old Sofia Liu in San Francisco, on the last day of 2013. (Muzzafar’s Uber app was open and running at the time he hit Liu and her family, indicating that he was cruising for fares, but the company refuses to accept any liability for the accident.)

That “better” amounts to a bland generic luxury.
Uber’s conception of user comfort pivots largely on predictability and familiarity. Rather than asking riders to contend with the particularities and idiosyncrasies of local mobility culture, or any of the various factors that distinguish a New York City taxi cab from one in London or Delhi or Beijing, the Uber fleet offers its users a mobile extension of international hospitality nonplace: a single distributed site where globalized norms of blandly aspirational luxury are reinforced.

The suggestions Uber drivers leave for one another on online discussion sites are revealing in this regard. Those who wish to receive high ratings from their passengers are advised to ensure that their vehicles are well-equipped with amenities (mints, bottled water, WiFi connectivity), and remain silent unless spoken to. The all-but-explicit aim is to render the back of an Uber S-Class or 7 Series experientially continuous with the airport lounges, high-end hotels and showplace restaurants of the business-centric generic city hypostatized by Rem Koolhaas in his 1994 article of the same name.

Interpersonal exchanges are more appropriately mediated by algorithms than by one’s own competence.
This conception of good experience is not the only thing suggesting that Uber, its ridership or both are somewhat afraid of actual, unfiltered urbanity. Among the most vexing challenges residents and other users of any large urban place ever confront is that of trust: absent familiarity, or the prospect of developing it over a pattern of repeated interactions, how are people placed (however temporarily) in a position of vulnerability expected to determine who is reliable?

Like other contemporary services, Uber outsources judgments of this type to a trust mechanic: at the conclusion of every trip, passengers are asked to explicitly rate their driver. These ratings are averaged into a score that is made visible to users in the application interface: “John (4.9 stars) will pick you up in 2 minutes.” The implicit belief is that reputation can be quantified and distilled to a single salient metric, and that this metric can be acted upon objectively.

Drivers are, essentially, graded on a curve: their rolling tally, aggregated over the previous 500 passenger engagements, must remain above average not in absolute terms, but against the competitive set. Drivers whose scores drop beneath this threshold may not receive ride requests, and it therefore functions as an effective disciplinary mechanism. Judging from conversations among drivers, further, the criteria on which this all-important performance metric is assessed are subjective and highly variable, meaning that the driver has no choice but to model what they believe riders are looking for in the proverbial “good driver,” internalize that model and adjust their behavior accordingly.
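
The disciplinary mechanic described above can be sketched in a few lines. The 500-trip window comes from the text; the fixed numeric threshold here is a simplifying assumption (as noted, the real cutoff floats against the local competitive set, and Uber's exact policy is not public):

```python
from collections import deque

class DriverRating:
    """Toy model of a rolling driver-rating tally. Window size is per
    the text; the absolute threshold is an assumption for illustration,
    standing in for the relative, competitive-set cutoff Uber uses."""

    def __init__(self, window: int = 500, threshold: float = 4.6):
        self.ratings = deque(maxlen=window)  # only the most recent trips count
        self.threshold = threshold

    def rate(self, stars: int) -> None:
        self.ratings.append(stars)  # oldest rating silently falls out of the window

    @property
    def score(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

    def eligible(self) -> bool:
        # Drivers who fall below the threshold may stop receiving ride requests.
        return self.score >= self.threshold

d = DriverRating(window=5, threshold=4.6)  # tiny window, to show the sliding effect
for stars in (5, 5, 4, 5, 5):
    d.rate(stars)
print(d.score, d.eligible())  # 4.8 True
d.rate(3)                     # window slides: the oldest 5 drops out
print(d.score, d.eligible())  # 4.4 False
```

What the sketch makes visible is the precarity: a single bad trip can tip a driver from eligible to invisible, and the driver has no way of knowing which rider, or which subjective criterion, did it.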

What riders are not told by Uber — though, in this age of ubiquitous peer-to-peer media, it is becoming evident to many that this has in fact been the case for some time — is that they too are rated by drivers, on a similar five-point scale. This rating, too, is not without consequence. Drivers have a certain degree of discretion in choosing to accept or deny ride requests, and to judge from publicly-accessible online conversations, many simply refuse to pick up riders with scores below a certain threshold, typically in the high 3’s. This is strongly reminiscent of the process that I have elsewhere called “differential permissioning,” in which physical access to everyday spaces and functions becomes ever-more widely apportioned on the basis of such computational scores, by direct analogy with the access control paradigm prevalent in the information security community. Such determinations are opaque to those affected, while those denied access are offered few or no effective means of recourse. For prospective Uber patrons, differential permissioning means that they can be blackballed, and never know why.
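
The rider-side gate is even simpler to model. The cutoff value below is an assumption drawn from the driver-forum chatter mentioned above ("the high 3's"); the point of the sketch is the opacity of the mechanism, not its exact parameters:

```python
def accept_request(rider_score: float, cutoff: float = 3.8) -> bool:
    """Driver-side gate: silently decline ride requests from riders
    below a cutoff. The 3.8 value is an illustrative assumption.
    Crucially, the declined rider receives no reason, and no appeal."""
    return rider_score >= cutoff

# Three would-be riders; B is simply never picked up.
queue = [("A", 4.9), ("B", 3.5), ("C", 4.2)]
served = [name for name, score in queue if accept_request(score)]
print(served)  # ['A', 'C']
```

This is differential permissioning in miniature: rider B experiences it only as cars that never arrive, with no indication that a computational score is doing the sorting.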

Uber certainly has this feature in common with algorithmic reputation-scoring services like Klout. But all such measures stumble in their bizarre insistence that trust can be distilled to a unitary value. This belies the common-sense understanding that reputation is a contingent and relational thing — that actions a given audience may regard as markers of reliability are unlikely to read that way to all potential audiences. More broadly, it also means that Uber constructs the development of trust between driver and passenger as a circumstance in which algorithmic determinations should supplant rather than rely upon (let alone strengthen) our existing competences for situational awareness, negotiation and the detection of verbal and nonverbal social cues.

Interestingly, despite its deployment of mechanisms intended to assess driver and passenger reliability, the company goes to unusual lengths to insulate itself from accountability. Following the December 2014 Delhi rape incident, police investigators were stunned to realize that while Uber had been operating in India for some time, neither the .in website nor any other document they had access to listed a local office. They were forced to register for the app themselves (and to download a third-party payment application) simply so they could hire an Uber car and have the driver bring them to the place where he believed his employers worked. Here we see William Gibson’s science-fictional characterization of 21st-century enterprise (“small, fast, ruthless. An atavism…all edge”) brought to pungent life.

Private enterprise should be valorized over public service provision on principle, even when public alternatives would afford comparable levels of service.
Our dissection of Uber makes it clear that, in schematic, the company offers
nothing that a transit authority like Transport for London could not in principle furnish its riders. Consider that TfL already has everything it would need to offer not merely a comparable, but a better and more equitable, service: operational control over London’s fleet of black cabs, a legendarily skilled and knowledgeable driver cohort, the regulatory ability to determine tariffs, and a set of existing application programming interfaces giving it the necessary access to data. Indeed, coupling an on-demand service directly to its standing public transit capacity (at route termini, for example, or in neighborhoods of poor network coverage) would extend its reach considerably, and multiply the value of its existing assets. Even after accounting for operating costs Uber is unwilling to bear, the return to the public coffers could be substantial. [UPDATE 29 August 2015: Something very much like this now appears to be happening in New York City.]

Like other transit authorities of its scale, TfL certainly has the sophistication to perform such an analysis. But the neoliberal values on which Uber thrives, and the concomitant assumption that public transport is best provisioned on a privatized, for-profit basis, have become so deeply embedded into the discourse of urban governance just about everywhere that no such initiative is ever proposed or considered. The implication is that the smart city is a place where callow, “disruptive” services with poor long-term prospects for collective benefit are allowed to displace the public accommodations previous generations of citydwellers would have demanded as a matter of course and of right.

Quite simply, the city is smaller for people who have access to Uber. The advent of near-effortless, on-demand, point-to-point personal mobility has given them a tesseract with which the occasionally unwieldy envelope of urban space-time can be folded down to something more readily manageable. It’s trivially easy to understand the appeal of this — especially when the night is dark, the bus shelter is cold, the neighborhood is remote, and one is alone.

But as should be clear by now, this power to fold space and time comes at a terrible cost. The four values enumerated above make Uber a prime generator of the patterns of spatialized injustice Stephen Graham has called “software-sorted geographies,” although it does so in a way unencompassed by Graham’s original account. Its ordinary operation injects the urban terrain with a mobile and distributed layer of invidious privilege, a hypersite where practices and values deeply inimical to any meaningful conception of the common wealth are continuously reproduced.

More insidiously yet, these become laminated into journey-planning and other services when they position Uber alongside other options available to the commuter, as simply another tab or item in a pull-down menu. Ethical questions are legislated at the level of interface design, at the hands of engineers and designers so immersed in the privileges of youth and relative wealth, and so inculcated with the values prevalent in their own industry, that they may well not perceive anything about Uber to be objectionable in the slightest. (Notable in this regard are Google Maps and Citymapper, both of which now integrate Uber as a modal option alongside public transit and taxis, and Apple’s App Store, which lists the Uber app as an “Essential.”) Consciously or not, though, every such integration acts to normalize the Randian solipsism, the fratboy misogyny, and the sneering disdain for the very notion of a public good that saturates Uber’s presentation of its identity.

Where innovations in personal mobility could just as easily be designed to extend the right to the city, and to conceive of on-demand access to all points in the urbanized field as a public utility, Uber acts to reinscribe and to actually strengthen existing inequities of access. It is an engine consciously, delicately and expertly tuned to socialize risk and privatize gain. In furtherance of the convenience of a few, it sheds risk on its drivers, its passengers, and the communities within which it operates. If in any way this offering is a harbinger of the network-mediated services we can expect to contend with in the city to come, I believe we are justified in harboring the gravest concern — and, further, in doing whatever we can to render the act of choosing to book a ride with Uber a social faux pas of Google Glass proportions.

And this is only to consider what is operating in the proposition offered by a single provider of networked mobility services. If there is a distinct set of values bound up in Uber, it is unmistakably enmeshed within the broader ideological commitments all but universally upheld in the conception of the smart city, wherever on Earth the deployment of this particular ensemble of technologies has been proposed. Chief among these are the reduction or elimination of taxes, tariffs, and duties; the concomitant recourse to corporate sponsorship (or outright privatization) of essential municipal services; the deregulation of activity between private actors; and the prioritization of other policies primarily oriented to the needs of classes and sectors within society that benefit from frictionless global trade.

A judicious onlooker might of course wonder what anything on this laundry list has to do with the attributes or capabilities of networked digital systems, but that is precisely the point. As articulated on terrain from Dholera to Rio de Janeiro to New York, we can understand the ostensibly utopian smart city as nothing more than the information-technological aspect of a globally triumphant but still-ravenous neoliberalism — a mask this ideology wears when it wishes to dissemble its true nature and appeal to audiences beyond its existing core of convinced adherents.

Dissecting Uber may help clarify the implications of this turn for those whose life chances are and will continue to be affected by it, but it is the merest start. There remain arrayed before the public for its consideration a very great number of other propositions that belong to the latter two of the smart city’s three orders of visibility, from security systems equipped with facial-recognition capability to networked thermostats to wearable devices aimed at nothing less than quantification of the self. It is these systems in which even the clearest ideological commitments are most likely to be screened or obscured, whether by the seemingly ordinary nature of the product or service or by the very complexity of the distributed technical architecture that underwrites it. Given what is at stake, it’s therefore essential that we subject all such propositions to the most sustained, detailed, and knowledgeable scrutiny before embracing them.

Biddle, Sam. “Uber used private location data for party amusement,” Valleywag, 30 September 2014. Retrieved from

Bradshaw, Tim. “Uber valued at $40bn in latest funding round,” Financial Times, 4 December 2014. Retrieved from

Carr, Paul. “Travis shrugged: The creepy, dangerous ideology behind Silicon Valley’s cult of disruption,” Pando Daily, 24 October 2012. Retrieved from

Constine, Josh. “Uber’s denial of liability in girl’s death raises accident accountability questions,” TechCrunch, 2 January 2014. Retrieved from

Fink, Erica. “Uber’s dirty tricks quantified: Rival counts 5,560 canceled rides,” CNN Money, 12 August 2014. Retrieved from

Gibson, William. “New Rose Hotel” in Burning Chrome, Ace Books, New York, 1986.

Graham, Stephen. “Software-Sorted Geography,” Progress in Human Geography, October 2005.

Greenfield, Adam. “Against the smart city,” Do projects, New York, 2013.

Greenfield, Adam. Everyware, New Riders Press, Berkeley CA, 2006.

Hempel, Jessi. “Why the surge-pricing fiasco is great for Uber,” Fortune, 30 December 2013. Retrieved from

Huet, Ellen. “Rideshare drivers still cornered into insurance secrecy,” Forbes, 18 December 2014. Retrieved from

Koolhaas, Rem. “The Generic City” in S, M, L, XL, The Monacelli Press, New York, 1994.

Rawlinson, Kevin. “Uber service ‘banned’ in Germany by Frankfurt court,” BBC News, 2 September 2014. Retrieved from

Reilly, Claire. “Uber reaches 4x surge pricing as Sydney faces hostage lockdown,” CNet News, 15 December 2014. Retrieved from

Said, Carolyn. “Leaked transcript shows Geico’s stance against Uber, Lyft,” SFGate, 23 November 2014. Retrieved from

Sandel, Michael. What Money Can’t Buy: The Moral Limits of Markets, Farrar, Straus and Giroux, New York, 2012.

Sharma, Aman. “Delhi government bans Uber, says it is misleading customers,” The Times of India, 8 December 2014. Retrieved from

Tran, Mark. “Taxi drivers in European capitals strike over Uber – as it happened,” The Guardian, 11 June 2014. Retrieved from

Weighing the pros and cons of driverless cars, in context

Consider the driverless car, as currently envisioned by Google.

As far as I can tell, anyway, most discussion of its prospects, whether breathlessly anticipatory or frankly horrified, is content to weigh it more or less as a given. But as I’m always harping on about, I just don’t believe we can usefully understand any technology in the abstract, as it sits on a smoothly-paved pad in placid Mountain View. To garner even a first-pass appreciation for the contours of its eventual place in our lives, we have to consider what it would work like, and how people would experience it, in a specified actual context. And so here — as just such a first pass, at least — I try to imagine what would happen if autonomous vehicles like those demo’ed by Google were deployed as a service in the place I remain most familiar with, New York City.

The most likely near-term scenario is that such vehicles would be constructed as a fleet of automated taxicabs, not the more radical and frankly more interesting possibility that the service embracing them would be designed to afford truly public transit. The truth of the matter is that the arrival of the technological capability bound up in these vehicles begins to upend these standing categories…but the world can only accommodate so much novelty at once. The vehicle itself is only one component of a distributed actor-network dedicated to the accomplishment of mobility; when the autonomous vehicle begins to supplant the conventional taxi, that whole network has to restabilize around both the vehicle’s own capabilities and the ways in which those capabilities couple with other, existing actors.

In this case, that means actors like the Taxi and Limousine Commission. Enabling legislation, a body of suitable regulation, a controlling legal authority, agreed-upon procedures for assessing liability and calibrating the provision of insurance: all of these things will need to be decided upon before anything like the automation of surface traffic in New York City can happen. And these provisions have a conservative effect. For the duration of some arbitrary transitional period, anyway, they’ll tend to drag this theoretically disruptive actor back toward the categories we’re familiar with, the modes in which we’re used to the world working. That period may last months or it may last decades; there’s just no way of knowing ahead of time. But during this interregnum, we’ll approach the new thing through interfaces, metaphors and other linkages we’re already used to.

Automated taxis, as envisioned by designer Petr Kubik.

So. What can we reasonably assert of a driverless car on the Google model, when such a thing is deployed on the streets and known to its riders as a taxi?

On the plus side of the ledger:
– Black men would finally be able to hail a cab in New York City;
– So would people who use wheelchairs, folks carrying bulky packages, and others habitually and summarily bypassed by drivers;
– Sexual harassment of women riding alone would instantly cease to be an issue;
– You’d never have a driver slow as if to pick you up, roll down the window to inquire as to your destination, and only then decide it wasn’t somewhere they felt like taking you. (Yes, this is against the law, but any New Yorker will tell you it happens every damn day of the week);
– Similarly, if you happen to need a cab at 4:30, you’ll be able to catch one — getting stuck in the trenches of shift change would be a thing of the past;
– The eerily smooth ride of continuous algorithmic control will replace the lurching stop-and-go style endemic to the last few generations of NYC drivers, with everything that implies for both fuel efficiency and your ability to keep your lunch down.

These are all very good things, and they’d all be true no matter how banjaxed the service-design implementation turns out to be. (As, let’s face it, it would be: remember that we’re talking about Google here.) But as I’m fond of pointing out, none of these very good things can be had without cost. What does the flipside of the equation look like?

– Most obviously, a full-fleet replacement would immediately zero out some 50,000 jobs — mostly jobs held by immigrants, in an economy with few other decent prospects for their employment. Let’s be clear that these, while not great jobs (shitty hours, no benefits, physical discomfort, occasionally abusive customers), generate a net revenue that averages somewhere around $23/hour, and this at a time when the New York State minimum wage stands at $8/hour. These are jobs that tie families and entire communities together;
– The wholesale replacement of these drivers would eliminate one of the very few remaining contexts in which wealthy New Yorkers encounter recent immigrants and their culture at all;
– Though this is admittedly less of an issue in Manhattan, it does eliminate at least some opportunity for drivers to develop and demonstrate mastery and urban savoir faire;
– It would give Google, an advertising broker, unparalleled insight into the comings and goings of a relatively wealthy cohort of riders, and in general a dataset of enormous and irreplicable value;
– Finally, by displacing alternatives, and over the long term undermining the ecosystem of technical capabilities, human competences and other provisions that undergirds contemporary taxi service, the autonomous taxi would in time tend to bring into being and stabilize the conditions for its own perpetuation, to the exclusion of other ways of doing things that might ultimately be more productive. Of course, you could say precisely the same thing about contemporary taxis — that’s kind of the point I’m trying to make. But we should see these dynamics with clear eyes before jumping in, no?

I’m sure, quite sure, that there are weighting factors I’ve overlooked, perhaps even obvious and significant ones. This isn’t the whole story, or anything like it. There is one broadly observable trend I can’t help noticing in all the above, however: the benefits we stand to derive from deploying autonomous vehicles on our streets in this way are all felt in the near or even immediate term, while the costs all tend to be circumstances that only tell in the fullness of time. And as a species we haven’t historically tended to do very well with this pattern — the prime example being our experience of the automobile itself. It’s something to keep in mind.

There’s also something to be gleaned from Google’s decision to throw in their lot with Uber — an organization explicitly oriented toward the demands of the wealthy and boundlessly, even gleefully, corrosive of the public trust. And that is that you shouldn’t set your hopes on any mobility service Google builds on their autonomous-vehicle technology ever being positioned as the public accommodation or public utility it certainly could be. The decision to more tightly integrate Uber into their suite of wayfinding and journey-planning services makes it clear that for Google, the prerogative to maximize return on investment for a very few will always outweigh the interests of the communities in which they operate. And that, too, is something to keep in mind, anytime you hear someone touting all of the ways in which the clean, effortless autotaxi stands to resculpt the city.

Beacons, marketing and the neoliberal logic of space, or: The Engelbart overshoot

If you’ve been reading this blog for any particular length of time, or have tripped across my writing on the Urbanscale site or elsewhere, you’ve probably noticed that I generally insist on discussing the ostensible benefits of urban technology at an unusually granular level. (In fact, I did this just yesterday, in my responses to questions put to me by Korea’s architectural magazine SPACE.) I’ll want to talk about specific locales, devices, instances and deployments, that is, rather than immediately hopping on board with the wide-eyed enthusiasm for generic technical “innovation” in cities that seems near-universal at our moment in history.

My point in doing so is that we can’t really fairly assess a value proposition, or understand the precise nature of the trade-offs bound up in a given deployment of technology, until we see what people make of it in the wild, in a specific locale. The canonical example of the perils that attend the overly generic consideration of a technology is bus rapid transit, or BRT, which works very, very well indeed on sociophysical terrain that strongly resembles its original home of Curitiba, and much less so in low-density environments like Johannesburg, or in places where, for whatever reason, access to the right-of-way can’t be controlled, notably Delhi and New York City. BRT was sold to these latter municipalities as a panacea for problems of urban mobility, without reference to all of the spatial, social, regulatory, pricing-model and service-design elements that had to be brought into balance before anything like success could be declared, and it shows. (Boy howdy, does it show. Have you ridden the New York City MTA’s half-assed instantiation of BRT lately?)

And if anything, information technology is even more sensitively dependent on factors like these. The choice of one touchscreen technology (form factor, operating system, service provider, register of language…) over another very often turns out to determine the success or failure of a given proposition.

But despite all this, sometimes it is possible for the careful observer to suss out the likely future contours of a technology’s adoption, based on a more general appreciation of its nature. And that’s why I want to take a little time today to discuss with you my thinking around the emergent class of low-power, low-range transmitters known as “beacons.”

Classically, of course, a “beacon” was a visually prominent effect of some sort, designed to notify or warn those encountering it of some otherwise indistinct condition or feature in the landscape. And perhaps as originally envisioned, this class of transmitters genuinely was supposed to be what it said on the tin: a simple way for relatively low-powered devices to find and lock onto one another, amid the fog and unpredictable dynamism of the everyday.

This is not a particularly new idea; as long ago as 2005, I’d proposed on my old v-2 site that networked objects would need some lightweight, low-cost way of radiating information about their presence and capabilities to other things (and by extension, people) in the near neighborhood — the foundation of what, at that time, I thought of as a “universal service-discovery layer” draped over the world. And of course I was nowhere near the first to have proposed something along these lines; I myself had been inspired to think more deeply about things talking to each other from a sideways reading of a throw-away bit of cleverness in Bruce Sterling’s 1998 novel Distraction, and it’s fair to say that the idea of things automatically broadcasting their identity to other things had been in the air for quite a few years before that.

But in evolving commercial parlance, beacons are nothing of the sort, really. A contemporary beacon (like these ugly and rather hostile-looking blebs, sold by Estimote) is primarily designed to capture information, not to convey it — and such information as it does convey outward is disproportionately intended to benefit the sender over the recipient. So my first objection to beacon technology is that this very framing is in itself mendacious, dishonest and misleading. (You know you’re in trouble when the very name of something is a lie.)

As things stand now, beacons are intended for one purpose, and one purpose alone: to capture and monetize your behavior. As with the so-called Internet of Things more broadly, there simply aren’t any particularly convincing or compelling use cases for the technology that aren’t about driving needless consumption; almost without exception, those that are even partially robust have to do with closing a commercial transaction. Both the language of beacon technology and the framework of assumptions it grows out of are airlessly, claustrophobically hegemonic, and this thinking is all over their sites: vendors urge you to deploy these “media-rich banner ads for the physical world” in “any physical place, such as your retail store,” to “drive engagement,” “cross-sell and up-sell” and eventually “convert” passersby to purchasers. Even beacon advocates have a hard time coming up with any more than half-hearted art projects by way of uses for the technology that are not founded in the desire to relieve some passing mark of the contents of their wallet, reliably, predictably and on an ongoing basis.

And even those scenarios of use which appear at first blush to be founded in blamelessly humanitarian ends, when subjected to trial by ordeal ultimately turn out to embrace the shabbiest neoliberal reasoning. Cheaper to spackle a subway station with networked microlocation transponders, goes the thinking, than to actually hire and train the (unpredictable, and damnably needy) human beings that might help riders navigate the corridors and interchange nodes. Even if the devices don’t actually turn out to work all that reliably in the fullness of time, or impose a starkly higher TCO than initially estimated, there will be a concrete deployment that someone can point to as an accomplishment, a ticked-off achievement and a justification for renewed budgetary allocation or re-election.

Finally, I find it noteworthy that the beacon cost-benefit proposition can only subsist when it is accomplished stealthily; when it is presented to citizens forthrightly and transparently, it is just as forthrightly rejected. Perhaps it’s a temporary blip of post-Snowden reticence, but my sense is that most of us have become chary of bundling too many performative dimensions of our identity onto our converged devices at once, and not at all without reason. (Ultimately, I diagnose similar reasons underneath the failure to date of digital wallets and similar device-based payment solutions to gain any market traction whatsoever, though there are other questions at play there as well.)

Beyond and back

The interest in beacons strikes me as being symptomatic of something deeper and more troubling in the culture of technology, something I think of as “the Engelbart overshoot.”

There was a powerful dream that sustained (and not incidentally, justified) half a century’s inquiry into the possibilities of information technology, from Vannevar Bush to Doug Engelbart straight through to Mark Weiser. This was the dream of augmenting the individual human being with instantaneous access to all knowledge, from wherever in the world he or she happened to be standing at any given moment. As toweringly, preposterously ambitious as that goal seems when stated so baldly, it’s hard to conclude anything but that we actually did achieve that dream some time ago, at least as a robust technical proof of concept.

We achieved that dream, and immediately set about betraying it. We betrayed it by shrouding the knowledge it was founded on in bullshit IP law, and by insisting that every interaction with it be pushed through some set of mostly invidious business logic. We betrayed it by building our otherwise astoundingly liberatory propositions around walled gardens and proprietary standards, by putting the prerogatives of rent-seeking ahead of any move to fertilize and renew the commons, and by tolerating the infestation of our informational ecology with vile, value-destroying parasites. These days technical innovators seem more likely to be lauded for devising new ways to harness and exploit people’s life energy for private gain than for the inverse.

In fact, you and I now draw breath in a post-utopian world — a world where the tide of technical idealism has long receded from its high-water mark, where it’s a matter of course to suggest that we must attach (someone’s) networked sensors to our bodies in order to know them, and where, rather astonishingly, it is possible for an intelligent person to argue that spamming the globe with such devices is somehow a precondition of “reclaim[ing our] environment as a place of sociability and creativity.” And this is the world in which beacons and the cause of advocacy for them arise.

There’s very little meaningful for this technology to do — no specifiable aim or goal that genuinely seems to require its deployment, which could not be achieved as or more readily in some other way. As presently constituted, anyway, it doesn’t serve the great dream of aiding us in our lifelong effort to make sense of the endlessly confounding and occasionally dangerous world. It furthers only the puniest and most shaming of ambitions. To the talented, technically capable folks working so hard to build out the beacon world, I ask: Is this really what you want to spend any part of your only life on Earth working to develop? To those advocating this turn, I ask: Can’t you think of any way of relating to people more interesting and productive than trying to sell them something they neither want nor need, and most likely cannot genuinely afford?

It doesn’t take too concerted an intellectual effort to understand what’s really going on with beacons — as a matter of fact, as we’ve seen, most people evidently seem to understand the situation perfectly well already. But I don’t hold out too much hope of getting any of the truly convinced to see the light on this question; we all know how very difficult it can be to get people to understand something when their salary (mortgage payments/kids’ private-school tuition/equity stake/deal flow) depends on them not understanding it. If you ask me, though, we were meant for better things than this.

“Against the smart city” teaser

The following is section 4 of “Against the smart city,” the first part of The City Is Here For You To Use. Our Do projects will be publishing “Against the smart city” in stand-alone POD pamphlet and Kindle editions later on this month.

UPDATE: The Kindle edition is now available for purchase.

4 | The smart city pretends to an objectivity, a unity and a perfect knowledge that are nowhere achievable, even in principle.

Of the major technology vendors working in the field, Siemens makes the strongest and most explicit statement[1] of the philosophical underpinnings on which their (and indeed the entire) smart-city enterprise is founded: “Several decades from now cities will have countless autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits and energy consumption, and provide optimum service…The goal of such a city is to optimally regulate and control resources by means of autonomous IT systems.”

We’ve already considered what kind of ideological work is being done when efforts like these are positioned as taking place in some proximate future. The claim of perfect competence Siemens makes for its autonomous IT systems, though, is by far the more important part of the passage. It reflects a clear philosophical position, and while this position is more forthrightly articulated here than it is anywhere else in the smart-city literature, it is without question latent in the work of IBM, Cisco and their peers. Given its foundational importance to the smart-city value proposition, I believe it’s worth unpacking in some detail.

What we encounter in this statement is an unreconstructed logical positivism, which, among other things, implicitly holds that the world is in principle perfectly knowable, its contents enumerable, and their relations capable of being meaningfully encoded in the state of a technical system, without bias or distortion. As applied to the affairs of cities, it is effectively an argument that there is one and only one universal and transcendently correct solution to each identified individual or collective human need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something which can be encoded in public policy, again without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)

Every single aspect of this argument is problematic.

Perfectly knowable, without bias or distortion: Collectively, we’ve known since Heisenberg that to observe the behavior of a system is to intervene in it. Even in principle, there is no way to stand outside a system and take a snapshot of it as it existed at time T.

But it’s not as if any of us enjoy the luxury of living in principle. We act in historical space and time, as do the technological systems we devise and enlist as our surrogates and extensions. So when Siemens talks about a city’s autonomous systems acting on “perfect knowledge” of residents’ habits and behaviors, what they are suggesting in the first place is that everything those residents ever do — whether in public, or in spaces and settings formerly thought of as private — can be sensed accurately, raised to the network without loss, and submitted to the consideration of some system capable of interpreting it appropriately. And furthermore, that all of these efforts can somehow, by means unspecified, avoid being skewed by the entropy, error and contingency that mark everything else that transpires inside history.

Some skepticism regarding this scenario would certainly be understandable. It’s hard to see how Siemens, or anybody else, could avoid the slippage that’s bound to occur at every step of this process, even under the most favorable circumstances imaginable.

However thoroughly Siemens may deploy their sensors, to start with, they'll only ever capture those qualities of the world that are amenable to capture, measure only those quantities that can be measured. Let's stipulate, for the moment, that these sensing mechanisms somehow operate flawlessly, and in perpetuity. What if information crucial to the formulation of sound civic policy is somehow absent from their soundings, resides in the space between them, or is derived from the interaction between whatever quality of the world we set out to measure and our corporeal experience of it?

Other distortions may creep into the quantification of urban processes. Actors whose performance is subject to measurement may consciously adapt their behavior to produce metrics favorable to them in one way or another. For example, a police officer under pressure to “make quota” may issue citations for infractions she would ordinarily overlook; conversely, her precinct commander, squeezed by City Hall to present the city as an ever-safer haven for investment, may downwardly classify[2] felony assault as a simple misdemeanor. This is the phenomenon known to viewers of The Wire as “juking the stats[3],” and it’s particularly likely to happen when financial or other incentives are contingent on achieving some nominal performance threshold. Nor is it the only factor likely to skew the act of data collection; long, sad experience suggests that the usual array of all-too-human pressures will continue to condition any such effort. (Consider the recent case in which Seoul Metro operators were charged with using CCTV cameras to surreptitiously ogle women passengers[4], rather than scan platforms and cars for criminal activity as intended.)

What about those human behaviors, and they are many, that we may for whatever reason wish to hide, dissemble, disguise, or otherwise prevent being disclosed to the surveillant systems all around us? "Perfect knowledge," by definition, implies either that no such attempts at obfuscation will be made, or that any and all such attempts will remain fruitless. Neither one of these circumstances sounds very much like any city I'm familiar with, or, for that matter, any city I'd want to live in.

And what about the question of interpretation? The Siemens scenario amounts to a bizarre compound assertion that each of our acts has a single salient meaning, which is always and invariably straightforwardly self-evident — in fact, so much so that this meaning can be recognized, made sense of and acted upon remotely, by a machinic system, without any possibility of mistaken appraisal.

The most prominent advocates of this approach appear to believe that the contingency of data capture is not an issue, nor is any particular act of interpretation involved in making use of whatever data is retrieved from the world in this way. When discussing their own smart-city venture, senior IBM executives[5] argue, in so many words, that “the data is the data”: transcendent, limpid and uncompromised by human frailty. This mystification of “the data” goes unremarked upon and unchallenged not merely in IBM’s material, but in the overwhelming majority of discussions of the smart city. But different values for air pollution in a given location can be produced by varying the height at which a sensor is mounted by a few meters. Perceptions of risk in a neighborhood can be transformed by altering the taxonomy used to classify reported crimes ever so slightly[6]. And anyone who’s ever worked in opinion polling knows how sensitive the results are to the precise wording of a survey. The fact is that the data is never “just” the data, and to assert otherwise is to lend inherently political and interested decisions regarding the act of data collection an unwonted gloss of neutrality and dispassionate scientific objectivity.
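The sensitivity of "the data" to classification choices can be made concrete with a toy sketch. (The incident list and taxonomies below are invented for illustration; they are not drawn from any real crime dataset.) The same log of events, tallied under two marginally different taxonomies, yields two different "violent crime" counts:

```python
# Illustrative toy: one incident log, two classification taxonomies,
# two different aggregate "risk" figures. The events never change;
# only the act of categorization does.

incidents = ["felony assault", "robbery", "felony assault",
             "burglary", "misdemeanor assault", "robbery"]

# Taxonomy A counts every assault as violent crime.
violent_a = {"felony assault", "misdemeanor assault", "robbery"}
# Taxonomy B quietly reclassifies misdemeanor assault as non-violent.
violent_b = {"felony assault", "robbery"}

count_a = sum(1 for i in incidents if i in violent_a)
count_b = sum(1 for i in incidents if i in violent_b)
print(count_a, count_b)  # 5 vs. 4: same streets, different "safety"
```

The taxonomy is authored by someone, for some purpose, and the resulting number inherits those choices however neutral it may look on a dashboard.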

The bold claim of perfect knowledge appears incompatible with the messy reality of all known information-processing systems, the human individuals and institutions that make use of them and, more broadly, with the world as we experience it. In fact, it’s astonishing that anyone would ever be so unwary as to claim perfection on behalf of any computational system, no matter how powerful.

One and only one solution: With their inherent, definitional diversity, layeredness and complexity, we can usefully think of cities as tragic. As individuals and communities, the people who live in them hold to multiple competing and equally valid conceptions of the good, and it’s impossible to fully satisfy all of them at the same time. A wavefront of gentrification can open up exciting new opportunities for young homesteaders, small retailers and craft producers, but tends to displace the very people who’d given a neighborhood its character and identity. An increased police presence on the streets of a district reassures some residents, but makes others uneasy, and puts yet others at definable risk. Even something as seemingly straightforward and honorable as an anticorruption initiative can undo a fabric of relations that offered the otherwise voiceless at least some access to local power. We should know by now that there are and can be no[7] Pareto-optimal solutions for any system as complex as a city.
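The formal version of this point can be sketched in a few lines. (The options and payoffs below are invented for illustration.) Given residents with opposed but equally valid preferences, no option Pareto-dominates the others, so there is simply no single "correct" answer for an algorithm to converge on:

```python
# Toy sketch: three policing levels scored by two residents as
# (resident_a_utility, resident_b_utility). No option is at least as
# good for both and strictly better for one, so none dominates.

options = {"low": (1, 3), "medium": (2, 2), "high": (3, 1)}

def dominates(x, y):
    """x Pareto-dominates y: no one worse off, someone strictly better off."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

undominated = [k for k, v in options.items()
               if not any(dominates(w, v) for w in options.values() if w != v)]
print(undominated)  # all three survive: the choice is political, not technical
```

Every option survives the dominance check, which is precisely the tragic structure described above: any selection privileges one conception of the good over another.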

Arrived at algorithmically: Assume, for the sake of argument, that there could be such a solution, a master formula capable of resolving all resource-allocation conflicts and balancing the needs of all a city’s competing constituencies. It certainly would be convenient if this golden mean could be determined automatically and consistently, via the application of a set procedure — in a word, algorithmically.

In urban planning, the idea that certain kinds of challenges are susceptible to algorithmic resolution has a long pedigree. It's already present in the Corbusian doctrine that the ideal and correct ratio of spatial provisioning in a city can be calculated from nothing more than an enumeration of the population; it underpins the complex composite indices of Jay Forrester's 1969 Urban Dynamics[8]; and it lay at the heart of the RAND Corporation's (eventually disastrous) intervention in the management of 1970s New York City[9]. No doubt part of the idea's appeal to smart-city advocates, too, is the familial resemblance such an algorithm would bear to the formulae by which commercial real-estate developers calculate air rights, the land area that must be reserved for parking in a community of a given size, and so on.

In the right context, at the appropriate scale, such tools are surely useful. But the wholesale surrender of municipal management to an algorithmic toolset — for that is surely what is implied by the word “autonomous” — would seem to repose an undue amount of trust in the party responsible for authoring the algorithm. At least, if the formulae at the heart of the Siemens scenario turn out to be anything at all like the ones used in the current generation of computational models, critical, life-altering decisions will hinge on the interaction of poorly-defined and surprisingly subjective values: a “quality of life” metric, a vague category of “supercreative[10]” occupations, or other idiosyncrasies along these lines. The output generated by such a procedure may turn on half-clever abstractions, in which a complex circumstance resistant to direct measurement is represented by the manipulation of some more easily-determined proxy value: average walking speed stands in for the more inchoate “pace” of urban life, while the number of patent applications constitutes an index of “innovation.”
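How much rides on these definitional choices is easy to demonstrate. (Everything in the sketch below — districts, proxy values, weights — is invented; it corresponds to no real model.) A composite "quality of life" index built from two proxy measures reverses its ranking of two districts under a small, essentially arbitrary change to the weights:

```python
# Toy sketch: a composite index over proxy values. "Pace of life" is
# proxied by walking speed, "innovation" by patents per capita; the
# weights are exactly as defensible as any others.

districts = {
    "A": {"walking_speed": 1.5, "patents_per_capita": 0.7},
    "B": {"walking_speed": 1.0, "patents_per_capita": 1.3},
}

def index(d, w_pace, w_innov):
    return w_pace * d["walking_speed"] + w_innov * d["patents_per_capita"]

rankings = []
for weights in [(0.6, 0.4), (0.4, 0.6)]:
    ranking = sorted(districts,
                     key=lambda k: index(districts[k], *weights),
                     reverse=True)
    rankings.append(ranking)
    print(weights, ranking)  # the ranking flips with the weights
```

If civic resources were distributed according to such an index, the decision would effectively have been made by whoever chose the weights, long before any data arrived.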

Even beyond whatever doubts we may harbor as to the ability of algorithms constructed in this way to capture urban dynamics with any sensitivity, the element of the arbitrary we see here should give us pause. Given the significant scope for discretion in defining the variables on which any such thing is founded, we need to understand that the authorship of an algorithm intended to guide the distribution of civic resources is itself an inherently political act. And at least as things stand today, neither in the Siemens material nor anywhere else in the smart-city literature is there any suggestion that either algorithms or their designers would be subject to the ordinary processes of democratic accountability.

Encoded in public policy, and applied transparently, dispassionately and in a manner free from politics: A review of the relevant history suggests that policy recommendations derived from computational models are only rarely applied to questions as politically sensitive as resource allocation without some intermediate tuning taking place. Inconvenient results may be suppressed, arbitrarily overridden by more heavily-weighted decision factors, or simply ignored.

The best-documented example of this tendency remains the work of the New York City-RAND Institute, explicitly chartered to implant in the governance of New York City “the kind of streamlined, modern management that Robert McNamara applied in the Pentagon with such success[11]” during his tenure as Secretary of Defense (1961-1968). The statistics-driven approach that McNamara’s Whiz Kids had so famously brought to the prosecution of the war in Vietnam, variously thought of as “systems analysis” or “operations research,” was first applied to New York in a series of studies conducted between 1973 and 1975, in which RAND used FDNY incident response-time data[12] to determine the optimal distribution of fire stations.

Methodological flaws undermined the effort from the outset. RAND, for simplicity’s sake, chose to use the time a company arrived at the scene of a fire as the basis of their model, rather than the time at which that company actually began fighting the fire; somewhat unbelievably, for anyone with the slightest familiarity with New York City, RAND’s analysts then compounded their error by refusing to acknowledge traffic as a factor in response time[13]. Again, we see some easily-measured value used as a proxy for a reality that is harder to quantify, and again we see the distortion of ostensibly neutral results by the choices made by an algorithm’s designers. But the more enduring lesson for proponents of data-driven policy has to do with how the study’s results were applied. Despite the mantle of coolly “objective” scientism that systems analysis preferred to wrap itself in, RAND’s final recommendations bowed to factionalism within the Fire Department, as well as the departmental leadership’s need to placate critical external constituencies; the exercise, in other words, turned out to be nothing if not political.
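The structure of the proxy error is simple enough to sketch. (The company names and timings below are invented for illustration, not taken from the RAND studies.) Ranking companies by arrival time alone, as the model did, can select a different "best" company than ranking by the time water actually hits the fire:

```python
# Toy sketch: (travel_minutes, setup_minutes) for two hypothetical
# companies. Setup time -- hydrant access, hose runs, and so on -- is
# invisible to a model that only measures arrival.

companies = {
    "Engine 1": (4.0, 6.0),
    "Engine 2": (5.0, 2.0),
}

by_arrival = min(companies, key=lambda c: companies[c][0])
by_water_on_fire = min(companies, key=lambda c: sum(companies[c]))
print(by_arrival, by_water_on_fire)  # the proxy and the reality disagree
```

Optimize station placement against the proxy, and you have optimized for something other than putting out fires.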

The consequences of RAND's intervention were catastrophic. Following their recommendations, fire battalions in some of the most vulnerable sections of the city were decommissioned, while the department opened other stations in low-density, low-threat areas; the spatial distribution of the firefighting assets that remained actually prevented resources from being applied where they were most critically needed. Great swaths of the city's poorest neighborhoods burned to the ground as a direct result — most memorably the South Bronx, but immense tracts of Manhattan and Brooklyn as well. Hundreds of thousands of residents were displaced, many permanently, and the unforgettable images that emerged fueled perceptions of the city's nigh-apocalyptic unmanageability that impeded its prospects well into the 1980s. Might a less-biased model, or a less politically-skewed application of the extant findings, have produced a more favorable outcome? This obviously remains unknowable…but the human and economic calamity that actually did transpire is a matter of public record.

Examples like this counsel us to be wary of claims that any autonomous system should ever be entrusted with the regulation and control of civic resources — just as we ought to be wary of claims that the application of some single master algorithm could result in a Pareto-efficient distribution of resources, or that the complex urban ecology might be sufficiently characterized in data to permit the effective operation of such an algorithm in the first place. For all of the conceptual flaws we've identified in the Siemens proposition, though, it's the word "goal" that just leaps off the page. In all my thinking about cities, it has frankly never occurred to me to assert that cities have goals. (What is Cleveland's goal? Karachi's?) What is being suggested here strikes me as a rather profound misunderstanding of what a city is. Hierarchical organizations can be said to have goals, certainly, but not anything as heterogeneous in composition as a city, and most especially not a city in anything resembling a democratic society.

By failing to account for the situation of technological devices inside historical space and time, the diversity and complexity of the urban ecology, the reality of politics or, most puzzlingly of all, the “normal accidents[14]” all complex systems are subject to, Siemens’ vision of cities perfectly regulated by autonomous smart systems thoroughly disqualifies itself. But it’s in this depiction of a city as an entity with unitary goals that it comes closest to self-parody.

If it seems like breaking a butterfly on a wheel to subject marketing copy to this kind of dissection, consider that I am merely taking Siemens and the other advocates of the smart city at their word: this is what they claim to really believe. When pushed on the question, of course, some individuals working for enterprises at the heart of the smart-city discourse admit that what their employers actually propose to do is distinctly more modest: they simply mean to deploy sensors on municipal infrastructure, and adjust lighting levels, headway or flow rates to accommodate real-time need. If this is the case, perhaps they ought to have a word with their copywriters, who do the endeavor no favors by indulging in the imperial overreach of their rhetoric. As matters now stand, the claim of perfect competence that is implicit in most smart-city promotional language — and thoroughly explicit in the Siemens material — is incommensurate with everything we know about the way technical systems work, as well as the world they work in. The municipal governments that constitute the primary intended audience for materials like these can only be advised, therefore, to approach all such claims with the greatest caution.


[1] Siemens Corporation. “Sustainable Buildings — Networked Technologies: Smart Homes and Cities,” Pictures of the Future, Fall 2008.

[2] For example, in New York City, an anonymous survey of “hundreds of retired high-ranking [NYPD] officials” found that “tremendous pressure to reduce crime, year after year, prompted some supervisors and precinct commanders to distort crime statistics” they submitted to the centralized COMPSTAT system. Chen, David W., “Survey Raises Questions on Data-Driven Policy,” The New York Times, 08 February 2010.

[3] Simon, David, Kia Corthron, Ed Burns and Chris Collins, The Wire, Season 4, Episode 9: “Know Your Place,” first aired 12 November 2006.

[4] Asian Business Daily. “Subway CCTV was used to watch citizens’ bare skin sneakily,” 16 July 2013. (In Korean.)

[5] Fletcher, Jim, IBM Distinguished Engineer, and Guruduth Banavar, Vice President and Chief Technology Officer for Global Public Sector⁠, personal communication, 08 June 2011.

[6] Migurski, Michal. “Visualizing Urban Data,” in Segaran, Toby and Jeff Hammerbacher, Beautiful Data: The Stories Behind Elegant Data Solutions, O’Reilly Media, Sebastopol CA, 2012: pp. 167-182. See also Migurski, Michal. “Oakland Crime Maps X,” tecznotes, 03 March 2008.

[7] See, as well, Sen’s dissection of the inherent conflict between even mildly liberal values and Pareto optimality. Sen, Amartya Kumar. “The impossibility of a Paretian liberal.” Journal of Political Economy Volume 78 Number 1, Jan-Feb 1970.

[8] Forrester, Jay. Urban Dynamics, The MIT Press, Cambridge, MA, 1969.

[9] See Flood, Joe. The Fires: How a Computer Formula Burned Down New York City — And Determined The Future Of American Cities, Riverhead Books, New York, 2010.

[10] See, e.g. Bettencourt, Luís M.A. et al. “Growth, innovation, scaling, and the pace of life in cities,” Proceedings of the National Academy of Sciences, Volume 104 Number 17, 24 April 2007, pp. 7301-7306.

[11] Flood, ibid., Chapter Six.

[12] Rider, Kenneth L. “A Parametric Model for the Allocation of Fire Companies,” New York City-RAND Institute report R-1615-NYC/HUD, April 1975; Kolesar, Peter. “A Model for Predicting Average Fire Company Travel Times,” New York City-RAND Institute report R-1624-NYC, June 1975.

[13] See the Amazon interview with Fires author Joe Flood.

[14] Perrow, Charles. Normal Accidents: Living with High-Risk Technologies, Basic Books, New York, 1984.

Stealthy, slippery, crusty, prickly and jittery redux: On design interventions intended to make space inhospitable

From Mitchell Duneier’s Sidewalk, 1999. The context is a discussion of various physical interventions that have been made in the fabric of New York City’s Pennsylvania Station:

On a walk through the station with [director of “homeless outreach” Richard] Rubel and the photographer Ovie Carter one summer day in 1997…I found it essentially bare of unhoused people. I told Rubel of my interest in the station as a place that had once sustained the lives of unhoused people, and asked if he could point out changes that had been made so that it would be less inviting as a habitat where subsistence elements could be found in one place. He pointed out a variety of design elements of the station which had been transformed, helping to illustrate aspects of the physical structure that had formerly enabled it to serve as a habitat.

He took us to a closet near the Seventh Avenue entrance. “We routinely had panhandlers gathering here, and you could see this closet area where that heavy bracket is, that was a niche.”

“What do you mean by ‘a niche’?”

“This spot right over here was where a panhandler would stand. So my philosophy is, you don’t create nooks and corners. You draw people out into the open, so that your police officers and your cameras have a clean line of sight [emphasis added], so people can’t hide either to sleep or to panhandle.”

Next he brought us to a retail operation with a square corner. “Someone here can sleep and be protected by this line of sight. A space like this serves nobody’s purpose [emphasis added]. So if their gate closes, and somebody sleeps on the floor over here, they are lying undetected. So what you try to do is have people construct their building lines straight out, so you have a straight line of sight with no areas that people can hide behind.”

Next he brought us to what he called a "dead area." "I find this staircase provides limited use to the station. Amtrak does not physically own this lobby area. We own the staircase and the ledge here. One of the problems that we have in the station is a multi-agency situation where people know what the fringe areas are, the gray areas, that are less than policed. So they serve as focal points for the homeless population. We used to see people sleeping on this brick ledge every night. I told them I wanted a barrier that would prevent people from sleeping on both sides of this ledge. This is an example of turning something around to get the desired effect."

“Another situation we had was around the fringes of the taxi roadway. We had these niches that were open. The Madison Square Garden customers that come down from the games would look down and see a community of people living there, as well as refuse that they leave behind.” He installed a fencing project to keep the homeless from going behind corners, drawing them out into the open [emphasis added]. “And again,” said Rubel, “the problem has gone away.”

This logic, of course, is immanent in the design of a great deal of contemporary public urban space, but you rarely find it expressed quite as explicitly as it is here. Compare, as well, Jacobs (1961) on the importance to vibrant street life (and particularly to children's opportunities for play) of an irregular building line at the sidewalk edge.

