Consider the driverless car, as currently envisioned by Google.
As far as I can tell, anyway, most discussion of its prospects, whether breathlessly anticipatory or frankly horrendified, is content to weigh it more or less as given. But as I’m always harping on about, I just don’t believe we usefully understand any technology in the abstract, as it sits on a smoothly-paved pad in placid Mountain View. To garner even a first-pass appreciation for the contours of its eventual place in our lives, we have to consider how it would actually work, and how people would experience it, in a specified actual context. And so here — as just such a first pass, at least — I try to imagine what would happen if autonomous vehicles like those demo’ed by Google were deployed as a service in the place I remain most familiar with, New York City.
The most likely near-term scenario is that such vehicles would be constructed as a fleet of automated taxicabs, not the more radical and frankly more interesting possibility that the service embracing them would be designed to afford truly public transit. The truth of the matter is that the arrival of the technological capability bound up in these vehicles begins to upend these standing categories…but the world can only accommodate so much novelty at once. The vehicle itself is only one component of a distributed actor-network dedicated to the accomplishment of mobility; when the autonomous vehicle begins to supplant the conventional taxi, that whole network has to restabilize around both the vehicle’s own capabilities and the ways in which those capabilities couple with other, existing actors.
In this case, that means actors like the Taxi and Limousine Commission. Enabling legislation, a body of suitable regulation, a controlling legal authority, agreed-upon procedures for assessing liability and calibrating the provision of insurance: all of these things will need to be decided upon before any such thing as the automation of surface traffic in New York City can happen. And these provisions have a conservative effect. For the duration of some arbitrary transitional period, anyway, they’ll tend to drag this theoretically disruptive actor back toward the categories we’re familiar with, the modes in which we’re used to the world working. That period may last months or it may last decades; there’s just no way of knowing ahead of time. But during this interregnum, we’ll approach the new thing through interfaces, metaphors and other linkages we’re already used to.
Automated taxis, as envisioned by designer Petr Kubik.
So. What can we reasonably assert of a driverless car on the Google model, when such a thing is deployed on the streets and known to its riders as a taxi?
On the plus side of the ledger:
- Black men would finally be able to hail a cab in New York City;
- So would people who use wheelchairs, folks carrying bulky packages, and others habitually and summarily bypassed by drivers;
- Sexual harassment of women riding alone, at least at the hands of the driver, would instantly cease to be an issue;
- You’d never have a driver slow as if to pick you up, roll down the window to inquire as to your destination, and only then decide it wasn’t somewhere they felt like taking you. (Yes, this is against the law, but any New Yorker will tell you it happens every damn day of the week);
- Similarly, if you happen to need a cab at 4:30 in the afternoon, you’ll be able to catch one — getting stuck in the trenches of shift change would be a thing of the past;
- The eerily smooth ride of continuous algorithmic control will replace the lurching stop-and-go style endemic to the last few generations of NYC drivers, with everything that implies for both fuel efficiency and your ability to keep your lunch down.
These are all very good things, and they’d all be true no matter how banjaxed the service-design implementation turns out to be. (As, let’s face it, it would be: remember that we’re talking about Google here.) But as I’m fond of pointing out, none of these very good things can be had without cost. What does the flipside of the equation look like?
- Most obviously, a full-fleet replacement would immediately zero out some 50,000 jobs — mostly jobs held by immigrants, in an economy with few other decent prospects for their employment. Let’s be clear that these, while not great jobs (shitty hours, no benefits, physical discomfort, occasionally abusive customers), generate a net revenue that averages somewhere around $23/hour, and this at a time when the New York State minimum wage stands at $8/hour. These are jobs that tie families and entire communities together;
- The wholesale replacement of these drivers would eliminate one of the very few remaining contexts in which wealthy New Yorkers encounter recent immigrants and their culture at all;
- Though this is admittedly less of an issue in Manhattan, it would eliminate at least some opportunity for drivers to develop and demonstrate mastery and urban savoir faire;
- It would give Google, an advertising broker, unparalleled insight into the comings and goings of a relatively wealthy cohort of riders, and in general a dataset of enormous and irreplicable value;
- Finally, by displacing alternatives, and over the long term undermining the ecosystem of technical capabilities, human competences and other provisions that undergirds contemporary taxi service, the autonomous taxi would in time tend to bring into being and stabilize the conditions for its own perpetuation, to the exclusion of other ways of doing things that might ultimately be more productive. Of course, you could say precisely the same thing about contemporary taxis — that’s kind of the point I’m trying to make. But we should see these dynamics with clear eyes before jumping in, no?
I’m sure, quite sure, that there are weighting factors I’ve overlooked, perhaps even obvious and significant ones. This isn’t the whole story, or anything like it. There is one broadly observable trend I can’t help noticing, however, in all the above: the benefits we stand to derive from deploying autonomous vehicles on our streets in this way are all felt in the near or even immediate term, while the costs all tend to be circumstances that only tell in the fullness of time. And we haven’t as a species historically tended to do very well with this pattern, the prime example being our experience of the automobile itself. It’s something to keep in mind.
There’s also something to be gleaned from Google’s decision to throw in their lot with Uber — an organization explicitly oriented toward the demands of the wealthy and boundlessly, even gleefully, corrosive of the public trust. And that is that you shouldn’t set your hopes on any mobility service Google builds on their autonomous-vehicle technology ever being positioned as the public accommodation or public utility it certainly could be. The decision to more tightly integrate Uber into their suite of wayfinding and journey-planning services makes it clear that for Google, the prerogative to maximize return on investment for a very few will always outweigh the interests of the communities in which they operate. And that, too, is something to keep in mind, anytime you hear someone touting all of the ways in which the clean, effortless autotaxi stands to resculpt the city.
UPDATE: Event confirmed for 14th March, 2014. See the final post.
For the past half-decade or so, in a phenomenon most everyone reading this site is no doubt already intimately acquainted with, data-derived artifacts (dynamic visualizations, digital maps, interactive representations of place-specific information, even static “infographics”) have taken increasing prominence in the visual imaginary of mass culture.
We see such images all the time now: broadly speaking, the visual rhetoric associated with them is the animating stuff of everything from car commercials to the weather forecast. The same rhetoric breathes life into election and sports coverage on television, the title sequences of movies, viral Facebook posts and the interactive features on newspaper sites.
Sometimes — in fact, often — these images are deployed as abstract tokens, empty fetishes of futurity, tech-ness, data-ness, evidence-basedness…ultimately, au-courantness. Just as often, and very problematically, they’re used to “prove” things.
But we’ve also begun to see the first inklings of ways in which such artifacts can be used more interestingly, to open up rather than shut down collective discussion around issues of great popular import — to ask its users to consider how and why the state of affairs represented by a given visualization got to be that way, whether that state of affairs is at all OK with them, and what if anything ought to be done to redress it. And this is true whether the topic at hand happens to be land use, urban renewal and gentrification, informal housing, the differential consequences of public and privatized mass transit or expenditures in the criminal justice system.
Very few methods of advocacy can convey the consequences of our collective decisions as viscerally as a soundly-designed visualization. (Similarly, if there’s a better way of helping people imagine the spatial implications of alternative policy directions, strategies, investments and allocations, I haven’t stumbled onto it yet, although that certainly blurs the distinction between representing that which does exist and simulating that which does not.) What would happen if such visualizations were consciously and explicitly used as the ground text and point of departure for a moderated deliberative process? Could democracy be done this way? Could this be done at regular intervals? And how might doing so lead to better outcomes (or simply more buy-in) than existing procedures?
There’s plenty of rough precedent for such a notion, albeit scattered across a few different registers of activity:
- A few savvy journalists are starting to use data-based visualizations and maps as the starting point for their more traditional investigative efforts, and the narratives built on them. Visualizations, in this mode, essentially allow unexpected correlations and fact patterns to rise to the surface of awareness, and suggest what questions it might therefore be fruitful for a reporter to ask.
- SeeClickFix, of course, already allows citizens to levy demands on local government bodies, though it doesn’t provide for the organization of autonomous response to the conditions it documents, and it forthrightly positions the objects it represents as problems rather than matters of concern. More proactive and affirmative in its framing is Change By Us, which does emphasize voluntarism, though still with a sense of supplication to (elected or appointed) representatives in government. (The site answers the question “Who’s listening?” by promising that a “network of city leaders is ready to hear your ideas and provide guidance for your projects.”) In any event, both SeeClickFix and Change By Us focus on highly granular, literally pothole- or at most community-garden-scale issues.
- Storefront Democracy, a student project of Kristin Gräfe and (ex-Urbanscaler) Jeff Kirsch, reimagined the front window of a city councillor’s district office as a site where community sentiment on various questions, expressed as votes, could be visualized. Voting is not quite the same thing as democracy, much less deliberation, but the project began to explore ways in which situated representations might be used to catalyze conversations about matters facing the community.
- There are even full-blown technological platforms that promise to enable robust networked democracy, though for all the technology involved this one at least seems to blow right by the potential of visualized states of affairs to serve as focal points for managed dissensus.
Draw out all of those threads, and what do you wind up with? I’m not at all sure, but the question is certainly provocative enough that I want to explore its implications in further depth and detail. Again, I’m interested in digital cartography and interactive representations of data used as the starting point, rather than the product and culmination, of a decision process. My intention is to disturb these things as settled facts, disinter them from the loam of zeitgeisty but near-meaningless infoporn that furnishes more than one glossy coffee-table book, and activate them instead as situated social objects. I think by now it’s clear that data-driven projects like Digital Matatus can furnish people with practical tools to manage the way things are in the city. But can they usefully catalyze conversation about the way things could (or should) be? And can we somehow bundle information about provenance into every representation of data, allowing users to ask how it was gathered, by whom, using what means and for what notional purpose, so they can arrive at their own determinations of its reliability and relevance? All of that remains to be seen.
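That last question, at least, is concrete enough to sketch. Below is one minimal shape such a provenance bundle might take, in Python; the `Observation` and `Provenance` structures and every field name here are my own invention for purposes of illustration, not any existing standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    collected_by: str     # the institution or individual responsible
    method: str           # instrument or survey technique used
    collected_on: str     # ISO 8601 date of collection
    stated_purpose: str   # why the data was gathered in the first place

@dataclass(frozen=True)
class Observation:
    value: float
    unit: str
    provenance: Provenance

# An invented air-quality reading, carrying its own history with it.
reading = Observation(
    value=42.0,
    unit="µg/m³",
    provenance=Provenance(
        collected_by="City environmental agency",
        method="Fixed NO2 sensor, mounted at 3 m above street level",
        collected_on="2014-02-01",
        stated_purpose="Regulatory air-quality compliance",
    ),
)

# Any rendering of the value can now expose how it came to be known.
print(reading.provenance.method)
```

The point is not this particular structure but the discipline it implies: if every datum carried something like it, a visualization could let its users drill down from the picture to the circumstances of collection.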
If you find yourself nodding at any of this — or, indeed, you think it’s all deeply misguided, but nevertheless worth contesting in person — consider this a heads-up that I’ll be convening a one-day seminar on this and related topics at LSE in mid-March, and am looking for qualified speakers beyond my personal orbit and existing friendship circles. If you’re interested in either attending or speaking, please do email me at your earliest convenience at my first initial dot my last name at lse.ac.uk. Limited travel support is available – I have an event budget that allows me to fly in two to three speakers and put you up in Central London for a night, so if you or someone you know is inclined to present I definitely encourage you to get in touch. And let’s see if together we can’t figure out if there’s a thing here or not.
The following is section 4 of “Against the smart city,” the first part of The City Is Here For You To Use. Our Do projects will be publishing “Against the smart city” in stand-alone POD pamphlet and Kindle editions later on this month.
4 | The smart city pretends to an objectivity, a unity and a perfect knowledge that are nowhere achievable, even in principle.
Of the major technology vendors working in the field, Siemens makes the strongest and most explicit statement of the philosophical underpinnings on which their (and indeed the entire) smart-city enterprise is founded: “Several decades from now cities will have countless autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits and energy consumption, and provide optimum service…The goal of such a city is to optimally regulate and control resources by means of autonomous IT systems.”
We’ve already considered what kind of ideological work is being done when efforts like these are positioned as taking place in some proximate future. The claim of perfect competence Siemens makes for its autonomous IT systems, though, is by far the more important part of the passage. It reflects a clear philosophical position, and while this position is more forthrightly articulated here than it is anywhere else in the smart-city literature, it is without question latent in the work of IBM, Cisco and their peers. Given its foundational importance to the smart-city value proposition, I believe it’s worth unpacking in some detail.
What we encounter in this statement is an unreconstructed logical positivism, which, among other things, implicitly holds that the world is in principle perfectly knowable, its contents enumerable, and their relations capable of being meaningfully encoded in the state of a technical system, without bias or distortion. As applied to the affairs of cities, it is effectively an argument that there is one and only one universal and transcendently correct solution to each identified individual or collective human need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something which can be encoded in public policy, again without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)
Every single aspect of this argument is problematic.
— Perfectly knowable, without bias or distortion: Collectively, we’ve known since Heisenberg that to observe the behavior of a system is to intervene in it. Even in principle, there is no way to stand outside a system and take a snapshot of it as it existed at time T.
But it’s not as if any of us enjoy the luxury of living in principle. We act in historical space and time, as do the technological systems we devise and enlist as our surrogates and extensions. So when Siemens talks about a city’s autonomous systems acting on “perfect knowledge” of residents’ habits and behaviors, what they are suggesting in the first place is that everything those residents ever do — whether in public, or in spaces and settings formerly thought of as private — can be sensed accurately, raised to the network without loss, and submitted to the consideration of some system capable of interpreting it appropriately. And furthermore, that all of these efforts can somehow, by means unspecified, avoid being skewed by the entropy, error and contingency that mark everything else that transpires inside history.
Some skepticism regarding this scenario would certainly be understandable. It’s hard to see how Siemens, or anybody else, could avoid the slippage that’s bound to occur at every step of this process, even under the most favorable circumstances imaginable.
However thoroughly Siemens may deploy their sensors, to start with, they’ll only ever capture the qualities about the world that are amenable to capture, measure only those quantities that can be measured. Let’s stipulate, for the moment, that these sensing mechanisms somehow operate flawlessly, and in perpetuity. What if information crucial to the formulation of sound civic policy is somehow absent from their soundings, resides in the space between them, or is derived from the interaction between whatever quality of the world we set out to measure and our corporeal experience of it?
Other distortions may creep into the quantification of urban processes. Actors whose performance is subject to measurement may consciously adapt their behavior to produce metrics favorable to them in one way or another. For example, a police officer under pressure to “make quota” may issue citations for infractions she would ordinarily overlook; conversely, her precinct commander, squeezed by City Hall to present the city as an ever-safer haven for investment, may downwardly classify felony assault as a simple misdemeanor. This is the phenomenon known to viewers of The Wire as “juking the stats,” and it’s particularly likely to happen when financial or other incentives are contingent on achieving some nominal performance threshold. Nor is it the only factor likely to skew the act of data collection; long, sad experience suggests that the usual array of all-too-human pressures will continue to condition any such effort. (Consider the recent case in which Seoul Metro operators were charged with using CCTV cameras to surreptitiously ogle women passengers, rather than scan platforms and cars for criminal activity as intended.)
What about those human behaviors, and they are many, that we may for whatever reason wish to hide, dissemble, disguise, or otherwise prevent being disclosed to the surveillant systems all around us? “Perfect knowledge,” by definition, implies either that no such attempts at obfuscation will be made, or that any and all such attempts will remain fruitless. Neither one of these circumstances sounds very much like any city I’m familiar with, or, for that matter, would want to live in.
And what about the question of interpretation? The Siemens scenario amounts to a bizarre compound assertion that each of our acts has a single salient meaning, which is always and invariably straightforwardly self-evident — in fact, so much so that this meaning can be recognized, made sense of and acted upon remotely, by a machinic system, without any possibility of mistaken appraisal.
The most prominent advocates of this approach appear to believe that the contingency of data capture is not an issue, nor is any particular act of interpretation involved in making use of whatever data is retrieved from the world in this way. When discussing their own smart-city venture, senior IBM executives argue, in so many words, that “the data is the data”: transcendent, limpid and uncompromised by human frailty. This mystification of “the data” goes unremarked upon and unchallenged not merely in IBM’s material, but in the overwhelming majority of discussions of the smart city. But different values for air pollution in a given location can be produced by varying the height at which a sensor is mounted by a few meters. Perceptions of risk in a neighborhood can be transformed by altering the taxonomy used to classify reported crimes ever so slightly. And anyone who’s ever worked in opinion polling knows how sensitive the results are to the precise wording of a survey. The fact is that the data is never “just” the data, and to assert otherwise is to lend inherently political and interested decisions regarding the act of data collection an unwarranted gloss of neutrality and dispassionate scientific objectivity.
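The taxonomy point can be made with a toy example. All of the figures below are invented for illustration: reclassify a handful of incidents downward and the headline count shifts substantially, though nothing on the street has changed.

```python
# Fifteen invented incident records: ten felony assaults, five misdemeanors.
incidents = ["felony_assault"] * 10 + ["misdemeanor_assault"] * 5

def violent_crime_count(records):
    # Suppose the published "violent crime" statistic counts only felonies.
    return sum(1 for r in records if r == "felony_assault")

before = violent_crime_count(incidents)

# A commander under pressure reclassifies four felonies as misdemeanors.
reclassified = (["misdemeanor_assault"] * 4
                + ["felony_assault"] * 6
                + ["misdemeanor_assault"] * 5)

after = violent_crime_count(reclassified)
print(before, after)  # 10 6
```

A 40% “drop” in violent crime, produced entirely at the level of classification, with no change whatsoever in the underlying events.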
The bold claim of perfect knowledge appears incompatible with the messy reality of all known information-processing systems, the human individuals and institutions that make use of them and, more broadly, with the world as we experience it. In fact, it’s astonishing that anyone would ever be so unwary as to claim perfection on behalf of any computational system, no matter how powerful.
— One and only one solution: With their inherent, definitional diversity, layeredness and complexity, we can usefully think of cities as tragic. As individuals and communities, the people who live in them hold to multiple competing and equally valid conceptions of the good, and it’s impossible to fully satisfy all of them at the same time. A wavefront of gentrification can open up exciting new opportunities for young homesteaders, small retailers and craft producers, but tends to displace the very people who’d given a neighborhood its character and identity. An increased police presence on the streets of a district reassures some residents, but makes others uneasy, and puts yet others at definable risk. Even something as seemingly straightforward and honorable as an anticorruption initiative can undo a fabric of relations that offered the otherwise voiceless at least some access to local power. We should know by now that there are and can be no Pareto-optimal solutions for any system as complex as a city.
— Arrived at algorithmically: Assume, for the sake of argument, that there could be such a solution, a master formula capable of resolving all resource-allocation conflicts and balancing the needs of all a city’s competing constituencies. It certainly would be convenient if this golden mean could be determined automatically and consistently, via the application of a set procedure — in a word, algorithmically.
In urban planning, the idea that certain kinds of challenges are susceptible to algorithmic resolution has a long pedigree. It’s already present in the Corbusian doctrine that the ideal and correct ratio of spatial provisioning in a city can be calculated from nothing more than an enumeration of the population; it underpins the complex composite indices of Jay Forrester’s 1969 Urban Dynamics; and it lay at the heart of the RAND Corporation’s (eventually disastrous) intervention in the management of 1970s New York City. No doubt part of the idea’s appeal to smart-city advocates, too, is the familial resemblance such an algorithm would bear to the formulae by which commercial real-estate developers calculate air rights, the land area that must be reserved for parking in a community of a given size, and so on.
In the right context, at the appropriate scale, such tools are surely useful. But the wholesale surrender of municipal management to an algorithmic toolset — for that is surely what is implied by the word “autonomous” — would seem to repose an undue amount of trust in the party responsible for authoring the algorithm. At least, if the formulae at the heart of the Siemens scenario turn out to be anything at all like the ones used in the current generation of computational models, critical, life-altering decisions will hinge on the interaction of poorly-defined and surprisingly subjective values: a “quality of life” metric, a vague category of “supercreative” occupations, or other idiosyncrasies along these lines. The output generated by such a procedure may turn on half-clever abstractions, in which a complex circumstance resistant to direct measurement is represented by the manipulation of some more easily-determined proxy value: average walking speed stands in for the more inchoate “pace” of urban life, while the number of patent applications constitutes an index of “innovation.”
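A toy composite index makes the arbitrariness visible. Everything below (the cities, the proxy metrics, the weightings) is invented for illustration; the point is only that a modest change in the analyst’s weights reverses the ranking the procedure produces.

```python
def composite_index(metrics, weights):
    # Weighted sum of normalized proxy values, in the style of a
    # "quality of life" or "creativity" index.
    return sum(metrics[k] * weights[k] for k in weights)

cities = {
    "City A": {"walking_speed": 0.9, "patents": 0.3, "supercreative": 0.5},
    "City B": {"walking_speed": 0.4, "patents": 0.8, "supercreative": 0.6},
}

# Two defensible-sounding weightings, differing only in emphasis.
w1 = {"walking_speed": 0.50, "patents": 0.25, "supercreative": 0.25}
w2 = {"walking_speed": 0.20, "patents": 0.50, "supercreative": 0.30}

rank_w1 = sorted(cities, key=lambda c: composite_index(cities[c], w1),
                 reverse=True)
rank_w2 = sorted(cities, key=lambda c: composite_index(cities[c], w2),
                 reverse=True)

print(rank_w1)  # ['City A', 'City B']
print(rank_w2)  # ['City B', 'City A']
```

Nothing about either weighting announces itself as wrong; the choice between them is exactly the kind of quiet, consequential authorship the surrounding text describes.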
Even beyond whatever doubts we may harbor as to the ability of algorithms constructed in this way to capture urban dynamics with any sensitivity, the element of the arbitrary we see here should give us pause. Given the significant scope for discretion in defining the variables on which any such thing is founded, we need to understand that the authorship of an algorithm intended to guide the distribution of civic resources is itself an inherently political act. And at least as things stand today, neither in the Siemens material nor anywhere else in the smart-city literature is there any suggestion that either algorithms or their designers would be subject to the ordinary processes of democratic accountability.
— Encoded in public policy, and applied transparently, dispassionately and in a manner free from politics: A review of the relevant history suggests that policy recommendations derived from computational models are only rarely applied to questions as politically sensitive as resource allocation without some intermediate tuning taking place. Inconvenient results may be suppressed, arbitrarily overridden by more heavily-weighted decision factors, or simply ignored.
The best-documented example of this tendency remains the work of the New York City-RAND Institute, explicitly chartered to implant in the governance of New York City “the kind of streamlined, modern management that Robert McNamara applied in the Pentagon with such success” during his tenure as Secretary of Defense (1961-1968). The statistics-driven approach that McNamara’s Whiz Kids had so famously brought to the prosecution of the war in Vietnam, variously thought of as “systems analysis” or “operations research,” was first applied to New York in a series of studies conducted between 1973 and 1975, in which RAND used FDNY incident response-time data to determine the optimal distribution of fire stations.
Methodological flaws undermined the effort from the outset. RAND, for simplicity’s sake, chose to use the time a company arrived at the scene of a fire as the basis of their model, rather than the time at which that company actually began fighting the fire; somewhat unbelievably, for anyone with the slightest familiarity with New York City, RAND’s analysts then compounded their error by refusing to acknowledge traffic as a factor in response time. Again, we see some easily-measured value used as a proxy for a reality that is harder to quantify, and again we see the distortion of ostensibly neutral results by the choices made by an algorithm’s designers. But the more enduring lesson for proponents of data-driven policy has to do with how the study’s results were applied. Despite the mantle of coolly “objective” scientism that systems analysis preferred to wrap itself in, RAND’s final recommendations bowed to factionalism within the Fire Department, as well as the departmental leadership’s need to placate critical external constituencies; the exercise, in other words, turned out to be nothing if not political.
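The shape of the error is easy to reproduce in miniature. The sketch below is entirely invented (distances, speeds and congestion factors alike, and it is not RAND’s actual model); it shows only how a proxy that ignores congestion can make two districts look similarly served when one of them is not.

```python
def travel_minutes(distance_km, speed_kmh):
    return 60.0 * distance_km / speed_kmh

# Two invented districts, each some distance from its nearest fire company.
# congestion_factor multiplies effective travel time at peak hours.
districts = {
    "low_density_outskirts": {"distance_km": 2.0, "congestion_factor": 1.0},
    "dense_inner_district":  {"distance_km": 3.0, "congestion_factor": 3.0},
}

# The proxy: free-flow travel time, with traffic ignored.
free_flow = {name: travel_minutes(d["distance_km"], 40)
             for name, d in districts.items()}

# A fuller measure: the same trip under peak congestion.
congested = {name: free_flow[name] * d["congestion_factor"]
             for name, d in districts.items()}

print(free_flow)   # the two districts look nearly interchangeable
print(congested)   # the dense district turns out to be badly underserved
```

Optimize against the first dictionary and the two districts appear roughly equivalent; optimize against the second and the dense district plainly needs more coverage, not less.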
The consequences of RAND’s intervention were catastrophic. Following their recommendations, fire battalions in some of the most vulnerable sections of the city were decommissioned, while the department opened other stations in low-density, low-threat areas; the spatial distribution of the firefighting assets that remained actually prevented resources from being applied where they were most critically needed. Great swaths of the city’s poorest neighborhoods burned to the ground as a direct result — most memorably the South Bronx, but immense tracts of Manhattan and Brooklyn as well. Hundreds of thousands of residents were displaced, many permanently, and the unforgettable images that emerged fueled perceptions of the city’s nigh-apocalyptic unmanageability that impeded its prospects well into the 1980s. Might a less-biased model, or a less politically-skewed application of the extant findings, have produced a more favorable outcome? This obviously remains unknowable…but the human and economic calamity that actually did transpire is a matter of public record.
Examples like this counsel us to be wary of claims that any autonomous system will ever be entrusted with the regulation and control of civic resources — just as we ought to be wary of claims that the application of some single master algorithm could result in a Pareto-efficient distribution of resources, or that the complex urban ecology might be sufficiently characterized in data to permit the effective operation of such an algorithm in the first place. For all of the conceptual flaws we’ve identified in the Siemens proposition, though, it’s the word “goal” that just leaps off the page. In all my thinking about cities, it has frankly never occurred to me to assert that cities have goals. (What is Cleveland’s goal? Karachi’s?) What is being suggested here strikes me as a rather profound misunderstanding of what a city is. Hierarchical organizations can be said to have goals, certainly, but not anything as heterogeneous in composition as a city, and most especially not a city in anything resembling a democratic society.
By failing to account for the situation of technological devices inside historical space and time, the diversity and complexity of the urban ecology, the reality of politics or, most puzzlingly of all, the “normal accidents” all complex systems are subject to, Siemens’ vision of cities perfectly regulated by autonomous smart systems thoroughly disqualifies itself. But it’s in this depiction of a city as an entity with unitary goals that it comes closest to self-parody.
If it seems like breaking a butterfly on a wheel to subject marketing copy to this kind of dissection, consider that I am merely taking Siemens and the other advocates of the smart city at their word: this is what they (claim to) really believe. When pushed on the question, of course, some individuals working for enterprises at the heart of the smart-city discourse admit that what their employers actually propose to do is distinctly more modest: they simply mean to deploy sensors on municipal infrastructure, and adjust lighting levels, headway or flow rates to accommodate real-time need. If this is the case, perhaps they ought to have a word with their copywriters, who do the endeavor no favors by indulging in the imperial overreach of their rhetoric. As matters now stand, the claim of perfect competence that is implicit in most smart-city promotional language — and thoroughly explicit in the Siemens material — is incommensurate with everything we know about the way technical systems work, as well as the world they work in. The municipal governments that constitute the primary intended audience for materials like these can only be advised, therefore, to approach all such claims with the greatest caution.
 For example, in New York City, an anonymous survey of “hundreds of retired high-ranking [NYPD] officials” found that “tremendous pressure to reduce crime, year after year, prompted some supervisors and precinct commanders to distort crime statistics” they submitted to the centralized COMPSTAT system. Chen, David W., “Survey Raises Questions on Data-Driven Policy,” The New York Times, 08 February 2010.
 Simon, David, Kia Corthron, Ed Burns and Chris Collins, The Wire, Season 4, Episode 9: “Know Your Place,” first aired 12 November 2006.
 Fletcher, Jim, IBM Distinguished Engineer, and Guruduth Banavar, Vice President and Chief Technology Officer for Global Public Sector, personal communication, 08 June 2011.
 Migurski, Michal. “Visualizing Urban Data,” in Segaran, Toby and Jeff Hammerbacher, Beautiful Data: The Stories Behind Elegant Data Solutions, O’Reilly Media, Sebastopol CA, 2012: pp. 167-182. See also Migurski, Michal. “Oakland Crime Maps X,” tecznotes, 03 March 2008.
 See, as well, Sen’s dissection of the inherent conflict between even mildly liberal values and Pareto optimality. Sen, Amartya Kumar. “The impossibility of a Paretian liberal.” Journal of Political Economy Volume 78 Number 1, Jan-Feb 1970.
 Forrester, Jay. Urban Dynamics, The MIT Press, Cambridge, MA, 1969.
 See Flood, Joe. The Fires: How a Computer Formula Burned Down New York City — And Determined The Future Of American Cities, Riverhead Books, New York, 2010.
 See, e.g. Bettencourt, Luís M.A. et al. “Growth, innovation, scaling, and the pace of life in cities,” Proceedings of the National Academy of Sciences, Volume 104 Number 17, 24 April 2007, pp. 7301-7306.
 Flood, ibid., Chapter Six.
 Rider, Kenneth L. “A Parametric Model for the Allocation of Fire Companies,” New York City-RAND Institute report R-1615-NYC/HUD, April 1975; Kolesar, Peter. “A Model for Predicting Average Fire Company Travel Times,” New York City-RAND Institute report R-1624-NYC, June 1975.
 Perrow, Charles. Normal Accidents: Living with High-Risk Technologies, Basic Books, New York, 1984.
An article I was commissioned to write for the Touch issue of What’s Next magazine.
What does it mean for a text to be digital?
In principle, it can be replicated in perfect fidelity, and transmitted to an unlimited number of recipients worldwide, at close to zero cost. Powerful analytic tools can be brought to bear on it, and on our reading of it. It can be compared against other texts, plumbed for clues as to its provenance and authorship. Each of our acts of engagement with it — whether of acquisition, reading, or annotation — can be shared with our social networks, mobilized as props in an ongoing performance of self. Above all, it becomes (to use the jargon practically unavoidable in any discussion of information technology) “platform-agnostic.” This is to say that it becomes independent, to a very great degree, of the physical medium in which it currently happens to be instantiated.
To varying degrees, these things have been true as long as words have been encoded in ones and zeroes — certainly since 1971, when Project Gutenberg was founded with the intention of digitizing as much of the world’s literature as possible, and making it all available for free. Why is it the case, then, that digital books only seem to have entered our lives in any major way in the last two or three years?
The apparently sudden arrival of the digital text likely owes something to the top-of-mind quality Amazon currently enjoys in its main markets, its name and value proposition as prominent in our awareness as those of the grocery chains, television networks or airlines we patronize — a presence it’s taken the company the better part of the last fifteen years to build up. And it surely has something to do with the widespread popular facility with the tropes and metaphors governing our engagement with digital content of all sorts that has developed over the same period of time, to the point that it’s increasingly hard to meet a grandparent inconversant with downloads, torrents and the virtues of cloud storage.
But the fundamental reason is probably that bit about platform-agnosticism. Anyone so inclined could have “engaged digital text” on a conventional computer at any point in the past forty years. But the act of reading didn’t — and maybe couldn’t — properly come into its own in the digital era until there was a platform for literature as present to the senses as paper itself, something as well-suited to the digital text as the road is to the automobile. I refer, of course, to the networked tablet.
It’s only with the widespread embrace of these devices that digital reading has become ubiquitous. Relatively inexpensive, lightweight, comfortable in the hand and capable of storing thousands of volumes, the tablet has merits as a reading environment that may strike us as self-evident. But there’s another factor that underlies its general appeal, and that is the specific phenomenology of the way we manipulate reading material when using one.
We read text on a tablet as pixels, just as we would on any screen. But the ways in which we physically address and move through a body of such pixels have more in common with the behaviors we learned from books in earliest childhood than with anything we picked up in the course of later encounters with computers. This is why the post-PC tablet feels more “intuitive” to us, despite the frank novelty of the gestures we must learn in order to use it, and which no book in the world has ever afforded: the swipe, the drag, the pinch, the tap.
This is the new tactility of reading. But where there are comparatively few semantically-meaningful ways in which the reader’s hand can meet the pages of a material book, the experience of engaging a digital text with the finger is subject to a certain variability. It’s not a boundless freedom — it’s delimited on one side by technological limitations, and on the other by the choices of an interaction designer — but it does require explication.
The first order of variability is the screen medium itself. Each of the major touchscreen technologies available — resistive, capacitive, projective-capacitive, optical — imposes its own constraints on the latency and resolution with which a screen registers a touch, and therefore how long one must place one’s finger against it to turn a page or select a word for definition or a passage for annotation. Reading on a good screen feels effortless, even transparent — but particularly high latency or low resolution can easily disrupt the flow of experience, lifting the reader up and out of the text entirely.
The second is the treatment of type. As critical as it is to the legibility and emotional resonance of a text, and even at the higher resolutions now theoretically available, typography is all but invariably treated as though it had not been refined over five centuries. It still feels like we are many years and product versions away from type on the tablet rendered with the craft and care it deserves.
A third order of variability consists in the separation of content, style and interface elements inherent in contemporary application design. This means that both the meaning of gestural interactions and the treatment of the page itself can vary from environment to environment. Especially given the pressure developers are under to differentiate their products from one another, a tap in the Kindle for iPad application may not mean precisely what a tap in Readmill or Instapaper or Reeder does, or work in the same way at all.
In fact, something as simple and as basic to the act of reading as turning a page is handled differently in all of these contexts.
Originally, of course, the pagination of text was an artifact of necessity, something imposed by running a semantically continuous text across a physically discontinuous quantity of leaves. One might think, therefore, that pagination would be among the first things to go in making the leap to the digital reading environment, but contemporary applications tend to retain it as a skeuomorphism, larding down the interaction with animated page curls and sound effects.
On the Kindle proper, the reader presses a button — one for page forward, another for page back — and the entire screen blanks and refreshes as the new page loads, a transition imposed by the nature of electronic pigment. In the Kindle app, by contrast, the page slides right to left, slipping from future to present to past in a series of discrete taps.
The Instapaper application is, perhaps, truest to the nature of digital copy. It dispenses with all of this, and treats the document as one continuous environment: swipe upward when you’re ready for more. Instapaper is an acknowledgment of the text’s liberation from the constraints of crude matter. Handled this way, there’s no reason a digital text can’t return to something approximating the book’s earliest form, a scroll — in this case, one capable of unspooling without limit.
Finally, we also need to account for what it means to absorb text as a luminous projection. Marshall McLuhan drew a distinction between “light-on” media — that is, those in which content inscribed on a passive surface like paper is illuminated by an external light source — and “light-through” media, like our luminous tablets; per his insistence that medium is coextensive with message, we can assume that the selfsame text consumed in these two ways would be received differently, emotionally every bit as much as cognitively.
As it happens, I have both an actual, e-paper Kindle — digital, but nevertheless light-on — and Kindle applications for the eminently light-through iPhone and iPad. And purely anecdotally, it does seem to be the case that I have an easier time with thornier, weightier reading on the e-paper device. Novels are fine on the iPad, even on my phone…but if I want to wrestle with Graham Harman or Susan Sontag, I reach for the Kindle.
The McLuhanite in me frets that, in embracing the tablet, we inadvertently give up much of our engagement with the text. That beyond sentimentality, there is something about the act of turning a page to punctuate a thought, or the phenomenology of light reflecting off of paper saturated with ink, that conditions the act of reading and makes it what we recognize it to be, at some level beneath the threshold of conscious perception.
Which brings us back, at last, to the printed artifact. We can acknowledge that the networked tablet is a brilliant addition to any reader’s instrumentarium. I’m certain that it increases the number of times and places at which people read, and know from long, intimate and sorrowful personal experience the difference it makes where the portability of entire libraries is concerned. But it’s not quite the same thing as a book or a magazine, and cannot entirely replace them.
Curiously enough, the ambitions to which paper appears to remain best-suited are diametrically opposite:
On the one hand, deep, thoughtful engagement with a body of language, an engagement that fully leverages the craft of bookmaking. In this pursuit, the tablet cannot yet offer nearly the typographic nicety, conscious design for legibility or perceptual richness trivially available from ink on paper — all of the things, in other words, that permit the reader to immerse herself for longer, and with less strain.
But there are also occasions on which surface is all important, where the ostensible content is almost incidental to the qualities of its packaging. Here the texture or other phenomenological qualities of paperstock itself — even its smell — communicate performatively; I think of glossy lifestyle magazines. It’s hard to imagine any tablet or similar device affording these virtues in anything like the near term.
If we understand a book as a container, the precise shape that container takes ought to reflect the nature of its intended contents, and what one proposes to do with them. Even as we acknowledge all the many virtues of networked, digital texts, the texture, tooth and heft of paper ensure that, at least in the contexts I’ve specified here, it remains irreplaceable among all the ways we contain thought as it flows from one human mind to another.
The other day I got mail asking me to contribute to something called usesthis, a site that asks a (frankly fairly homogeneous) selection of creative workers to describe their “setup” — or, in other words, the combination of hardware and software they use on a daily basis — as well as their ideal such arrangement.
I’m always happy enough for a prompt to think in this direction. Although usesthis isn’t really (no pun intended) set up to examine these issues, the whole question of a relationship between creative output and one’s choice of tools is inherently interesting, and is kind of an ongoing preoccupation of mine. As a good connectionist, I’m bound to believe that the artifacts we use mediate or allow us to approach the world in certain specific ways. It follows from this that our selection of one particular tool over another conditions the kind of relations we’re able to enter into — but also, that if the tool is functioning properly, we’re ordinarily unaware of its operations, or of this potential it has to constrain or to open.
If we’re inclined to examine that potential, a rigorous accounting for the intermediators we choose can help us rise up out of the usual, unconscious relation we have to them, and restore the sense of interested inquiry Heidegger (at least) calls presence-at-hand — see Peter Erdélyi’s foreword to The Prince and The Wolf for a particularly pungent version of this.
There’s a lot to say, too, about the determinisms implicit in our selection of specific tools. Very often, particular methods and tools tell in the finished work; it’s not simply, then, that mediating artifacts shape our own ability to act in the world, it’s that they indirectly condition the experience of everyone who comes into contact with the result of that action thereafter. (I’m put in mind of Matthew Fuller and Usman Haque’s prescient comment, in their Situated Technologies pamphlet Urban Versioning System 1.0, that “[i]t is often possible to determine, admittedly more so in a building than in a neighborhood, whether it was designed using AutoCAD, Microstation or Vectorworks.”)
I think it’s relatively easy to see what this means for creative domains like fashion, music, or (as the Fuller/Haque quote implies) architecture. Take the work of Issey Miyake, for example. We can trace the very different ways in which A-POC and the superficially similar Pleats Please line are perceived (by the wearer, by the observer) to specific techniques used in their creation, observe that the material qualities of Pleats Please garments result from polyester fabric being subjected to a particular heat-press process. The way the garment drapes on the body is the direct result of the cloth’s having been shaped by a particular regime of temperature, constraint and pressure — a regime which is in turn brought into local being by a highly particularized set of tools. If you’re interested in understanding why the Pleats Please line tends to appeal to women d’un certain âge, some consideration of how the designer’s understanding of the body is mediated to the body via the deployment of those tools seems indispensable.
Similarly, albeit in a rather different register, it strikes me as being very difficult to discuss Stephen O’Malley’s work without understanding at least a little something about drop-tuning, .68-gauge strings and the performance envelope of the Sunn Model T amplifier. The unique somatic (SOMAtic?) experience of a SUNN O))) gig is contingent on these elements — these things — being present, assembled and wielded in a particular way. The affordances and constraints of the objects yoked together in the act of production are directly relevant to the phenomenology of the finished product, even if that “product” is a ten-minute excursion in dronespace.
Casting light on the mesh of associations that bring a Pleats Please garment or a SUNN O))) cut into being does tend to construct creativity a little bit differently than we have traditionally been used to, and I think that’s entirely legitimate. Instead of positioning creation as the act of a lone genius, this way of looking at things suggests that the ability to bring novelty forth is, instead, something that’s smeared out across a network of heterogeneous participants, both human and non-human. This is certainly a decentering of the individual designer, but by no means do I necessarily think of it as an insult. It merely suggests that in those domains where creative production does require the enlistment of such ensembles, exceptional designerly talent ought properly be understood as the specific genius of knowing how to activate, and enable the operations of, such an ensemble — something more akin to orchestration than anything else. In this light, there’s still a great deal to be discovered by poking into the specifics of a given ensemble, and asking how each is brought to bear on the task of creation.
For those of us who work primarily in the medium of words, though, the case isn’t as clearcut.
It’s not as if at least some descriptions of the writer’s toolkit aren’t of interest. Here’s John Brunner, in the final words of his 1968 Stand on Zanzibar:
“This non-novel was brought to you by John Brunner using Spicer Plus Fabric Bond and Commercial Bank papers interleaved with Serillo carbons in a Smith Corona 250 electric typewriter fitted with a Kolok black-record ribbon.”
This was a good McLuhanite, speaking to the formal concerns of the Pop moment. That invocation of brands carries along with it a certain zazzy quality, a sense of liberation experienced in and through commodities I associate with Warren Chalk’s 1964 Living City Survival Kit. (In 1968, as four years earlier, you could still plausibly argue that this was fresh and revelatory.) In this case, as it happens, more specific yet is better. So not just any Smith Corona 250, but John Brunner’s Smith Corona 250. It adds something — something ineffable, and if you know anything about Brunner’s life, ineffably sad — to your appreciation of his oeuvre to read what’s on the Dymo-tape labels he affixed to this daily working tool.
But that has more to do with the object as environment, and only invokes the Smith Corona 250’s material properties and other affordances in the rather attenuated sense that its front affords a surface on which to stick a label. This, of course, is a quality it has in common with a great many other objects that might have occupied the same space on Brunner’s desk. And this begins to get to the crux of what I find a little curious about asking writers about their “setup.”
For me, anyway, focusing on getting things just-so is very little other than a way of delaying the moment I actually settle down to do what I need to. Most of us have some such ritual; Matt Jones memorably describes this process of lining up one’s pencils and notebooks (in preference to actually using the former to write in the latter) as “shaving the yak.” I’ll admit that I also find it a little unseemly, at this point in history, to mention specific named brands and commercial offerings. I’m not Warren Chalk, this isn’t London in 1964, and I’m not performing a swingin’ly post-austerity self through my consumption of Canadian Club and Miles Davis sides. So while, yeah, sure, I use such-and-such a text editor, under a given operating system, running on a particular model of laptop, you won’t learn that much about me — or more to the point, develop any particularly salient insight into the structuration of the argument I’m trying to make — by having these specifics revealed to you. The blunt truth of things is that I would almost certainly be expressing these same sentiments were I working in Microsoft Word on the kind of thoroughly generic, commodity Windows machine the “wrong people” use. From this perspective, the ideal setup of tools is nothing but the one that most readily dissolves into intention. ‘Nuff said, yeah?
I really want to recommend to you this Olivier Thereaux post about broken bus systems and how they might be fixed (and not just because I happen to be taking the MUNI a great deal lately).
What Olivier absolutely nails is the expression of a thought I’ve come back to again and again over the years: that buses and bus networks are by their nature so intimidating to potential users that many people will do just about anything to avoid engaging them. I don’t mind admitting that, depending on the city, the language in use, and my relative level of energy, I’m definitely to be numbered among those people. When buses are effectively the only mode of public transit available, that “just about anything” has occasionally meant laying out ridiculous sums on taxis; more often, it’s resulted in my walking equally absurd distances across cities I barely know.
“Intimidating,” in this context, doesn’t need to mean “terrifying.” It simply implies that the system is just complicated enough, just hard enough to form a mental model of, that the fear of winding up miles away from your intended destination — and possibly with no clear return route, not enough or the right kind of money to pay for a ticket, and no way of asking for clarification — is a real thing. There’s a threshold of comfort involved, and for quite a few categories of users (the young, the old, visitors, immigrants, people with literacy or other impairments) that threshold is set too high. People in this position wind up seeking alternatives…and if practical alternatives do not exist, they do without mobility altogether. They are lost to the city, and the city is lost to them.
The point is more broadly applicable, as well. You know I believe that cities are connection machines, networks of potential subject to Metcalfe’s law. What this means in the abstract is that the total value of an urban network rises as the square of the number of nodes connected to it. What this means in human terms is that a situation in which people are too intimidated to ride the bus (or walk down the street, or leave the apartment) is a sorrow compounded. Again: everything they could offer the network that is the city is lost. And everything we take for granted about the possibilities and promise of great urban places is foreclosed to them.
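The arithmetic of the Metcalfe’s-law point above can be sketched quickly. In this minimal illustration (the unit constant and the rider counts are purely hypothetical, chosen only to show the shape of the effect), note how excluding even a small fraction of riders removes a disproportionate share of the network’s total value:

```python
# Metcalfe's law, as invoked above: the value of a network rises as the
# square of the number of connected nodes. The constant k and the rider
# figures below are arbitrary, illustrative assumptions.

def network_value(nodes: int, k: float = 1.0) -> float:
    """Metcalfe's-law estimate: value proportional to n squared."""
    return k * nodes * nodes

city = 1_000_000       # notional riders connected to the transit network
excluded = 50_000      # riders lost to intimidation (5% of the total)

loss = network_value(city) - network_value(city - excluded)
share_lost = loss / network_value(city)

# Losing 5% of the nodes costs the network roughly 9.75% of its value,
# because each lost rider also severs every connection they might have made.
print(f"{share_lost:.4f}")
```

The asymmetry is the point: the harm of exclusion compounds, which is why the essay calls it “a sorrow compounded.”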
If you understand things this way, there’s a clear moral imperative inscribed in the design of systems like bus networks and interfaces. Every incremental thing the designer can do to demystify, explain, clarify, and ultimately to lower the threshold at which a potential user decides the risk of climbing aboard is worth taking does a double service — if the Metcalfe’s law construction of things rings true to you, a geometrical service. You are simultaneously improving the conditions under which an individual lives his or her life, and contributing materially to the commonweal. Not bad for a day’s work, if you ask me.
This is personal for me, too, and not just because I’ve occasionally found a route map overwhelming, or decided to walk from Bloomsbury to Dalston instead of chancing the N38 and winding up in, who knows, Calais. What I’ve come to understand, in these last few years of intense concentration on issues of urban design, is that my fascination with cities grows not at all out of ease or comfort with them, but the opposite. I’m an introvert, I’ve never been comfortable approaching strangers with questions, I’m twitchily hyperaware when I’m inconveniencing others (e.g. holding up a bus by asking questions of a driver) and my gifts for language are not great. Above all, I don’t like looking vulnerable and confused any more than anyone does, especially when traveling.
I’ve gotten better on all these counts over the course of my life, but they’re still issues. They can pop to the surface at any time, and, of course, are more likely to do so under conditions of stress. Taken together, what they spell for me is a relatively circumscribed ability to get around and enjoy the things the cities I visit have to offer — relatively, that is, compared to other able-bodied people my own age and with similar levels of privilege. Even this limitation, though, makes me acutely aware of just how difficult getting around can be, how very intimidating it can all seem, and what both people and place stand to lose each and every single time this intimidation is allowed to govern outcomes.
This is why I believe Olivier is absolutely right to focus on design interventions that reduce user stress, and, with all due respect, it’s why I think people like this Speedbird commenter, who understand cities solely as generators of upside potential, are missing something in the empathy department. There are an awful lot of people, everywhere around us, in every city, who have difficulty negotiating the mobility (and other) systems that are supposed to serve their needs. As far as I’m concerned, anyway, it is the proper and maybe even the primary task of the urban systems designer to work with compassion and fearless empathy to address this difficulty. Only by doing so can we extend the very real promise of that upside potential to the greatest possible number of people who would otherwise be denied it, in part or in full, and only by doing so can we realize in turn the full flowering of what they have to offer us.
I’m halfway through Reinventing the Automobile at the moment, which I figure represents the final comprehensive statement of Bill Mitchell’s thinking about urban mobility. As you’d imagine, it’s a passionately-held and painstakingly worked-out vision, basically the summation of all the work anyone with an interest in the space has seen in dribs and drabs over the past few years; it’s clear, for example, that this is what all the work on P.U.M.A. and MIT CityCar was informed by and leading towards.
In outline, Reinventing presents the reader with four essential propositions about the nature of next-generation urban mobility, none of which I necessarily disagree with prima facie:
- That the design principles and assumptions underlying the contemporary automobile — descended as they are, in an almost straight line, from the horseless carriage — are badly obsolete. Specifically, industry conventions regarding a vehicle’s source of motive power, drive and control mechanism, and mode of operation ought to be discarded in their entirety and replaced with ones more appropriate to an age of dense cities, networks, lightweight materials, clean energy and great personal choice.
- That mobility itself is being transformed by information; that extraordinary efficiencies can be realized and tremendous amounts of latent value unlocked if passenger, vehicle and the ground against which both are moving are reconceived as sources and brokers of, and agents upon, real-time data. (Where have I heard that before?)
- That the physical and conceptual infrastructure underlying the generation, storage and distribution of energy is also, and simultaneously, being transformed by information, with implications (again) for the generation of motive power, as well as the provision of environmental, information, communication and entertainment services to vehicles.
- That the above three developments permit (compel?) the wholesale reconceptualization of vehicles as agents in dynamic pricing markets for energy, road-space and parking resources, as well as significantly more conventional vehicle-share schemes.
It’s only that last one that I have any particular quibbles with. Even before accounting for the creepy hints of emergent AI in commodity-trading software I keep bumping up against (and that’s only meant about 75% tongue-in-cheek), I’m not at all convinced that empowering mobile software avatars to bid on road resources in tightly-coupled, nanosecond loops will ever lead to anything but the worst and most literal sort of gridlock.
But that’s not the real problem I have with this body of work. What I really tripped over, as I read, was the titanic dissonance between the MIT vision of urban life and mobility and the one that I was immersed in as I rode the 33 bus across town. It’s a cheap shot, maybe, but I just couldn’t get past the gulf between the actual San Franciscans around me — the enormous, sweet-looking Polynesian kid lost in a half-hour-long spell of autistic head-banging that took him from Oak and Stanyan clear into the Mission; the grizzled but curiously sylphlike person of frankly indeterminate gender, stepping from the bus with a croaked “God bless you, driver” — and the book’s depiction of sleekly silhouetted personae-people reclining into the Pellicle couches of their front-loading CityCars.
Any next-generation personal mobility system that didn’t take the needs and capabilities of people like these — no: these people, as individuals with lives and stories — into account…well, I can’t imagine that any such thing would be worth the very significant effort of bringing it into being. And despite some well-intentioned gestures toward the real urban world in the lattermost part of the book, projected mobility-on-demand sitings for Taipei and so on, there’s very little here that treats present-day reality as anything but something that Shall Be Overcome. It’s almost as if the very, very bright people responsible for Reinventing the Automobile have had to fend off any taint of human frailty, constraint or limitation in order to haul their total vision up into the light. (You want to ask, particularly, if any of them had ever read Aramis.)
Weirdly enough, the whiff of Gesamtkunstwerk I caught off of Reinventing reminded me of nothing so much as a work you’d be hard-pressed to think of as anything but its polar opposite, J.H. Crawford’s Carfree Cities. That, too, is a work where an ungodly amount of effort has been lavished on detailed depictions of the clean-slate future…and that, too, strikes me as refusing to engage the world as it is.
Maybe I wind up so critical of these dueling visions of future cities and mobility in them precisely because they are total solutions, and I’m acutely aware of my own weakness for and tendency toward same. I don’t think I’d mind, at all, living in one of Crawford’s carfree places, nor can I imagine that the MIT cityscape would be anything but an improvement on the status quo (if the devil was hauled out of its details and treated to a righteous ass-whupping). But to paraphrase one of my favorite philosophers, you go to the future with the cities, vehicles and people you have, not the ones you want. I have to imagine — have to — that the truly progressive and meaningful mobility intervention has a lot more to do with building on what people are already doing, and that’s even stipulating the four points above.
Bolt-on kits. Adaptive reuse. Provisional and experimental rezoning. Frameworks, visualizations and models that incorporate existing systems and assets, slowly revealing them (to users, planners, onlookers) to be nothing other than the weavings of a field, elements of a transmobility condition. And maybe someone whose job it is to account for everyone sidelined by the sleek little pods, left out of the renderings when the New Mobility was pitched to its sponsors.
Bottom line: this book is totally worth buying, reading and engaging if you have even the slightest interest in this topic. Its spinal arguments are very well framed, very clearly articulated, constructed in a way that makes them very difficult to mount cogent objections to…and almost certainly irrelevant to the way personal urban mobility is going to evolve, at least at the level of whole systems. And that’s the trouble, really, because so much of the value in the system described in these pages only works as a holism.
Like my every other negotiation with Bill Mitchell’s thought, including both engagements with his work and encounters in person, I want to be convinced. I want to believe. I want to be seduced by the optimism and the confidence that these are the right answers. But ultimately, as on those other occasions, I’m left with the sense that there are some important questions that have gone unasked, and which could not in any event have been satisfactorily answered in the framework offered. It may or may not say more about me than it does about anything else, but I just can’t see how the folks on the 33 Stanyan fit into the MIT futurama.