The following is section 4 of “Against the smart city,” the first part of The City Is Here For You To Use. Our Do projects will be publishing “Against the smart city” in stand-alone POD pamphlet and Kindle editions later on this month.
4 | The smart city pretends to an objectivity, a unity and a perfect knowledge that are nowhere achievable, even in principle.
Of the major technology vendors working in the field, Siemens makes the strongest and most explicit statement of the philosophical underpinnings on which their (and indeed the entire) smart-city enterprise is founded: “Several decades from now cities will have countless autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits and energy consumption, and provide optimum service…The goal of such a city is to optimally regulate and control resources by means of autonomous IT systems.”
We’ve already considered what kind of ideological work is being done when efforts like these are positioned as taking place in some proximate future. The claim of perfect competence Siemens makes for its autonomous IT systems, though, is by far the more important part of the passage. It reflects a clear philosophical position, and while this position is more forthrightly articulated here than it is anywhere else in the smart-city literature, it is without question latent in the work of IBM, Cisco and their peers. Given its foundational importance to the smart-city value proposition, I believe it’s worth unpacking in some detail.
What we encounter in this statement is an unreconstructed logical positivism, which, among other things, implicitly holds that the world is in principle perfectly knowable, its contents enumerable, and their relations capable of being meaningfully encoded in the state of a technical system, without bias or distortion. As applied to the affairs of cities, it is effectively an argument that there is one and only one universal and transcendently correct solution to each identified individual or collective human need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something which can be encoded in public policy, again without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)
Every single aspect of this argument is problematic.
— Perfectly knowable, without bias or distortion: Collectively, we’ve known since Heisenberg that to observe the behavior of a system is to intervene in it. Even in principle, there is no way to stand outside a system and take a snapshot of it as it existed at time T.
But it’s not as if any of us enjoy the luxury of living in principle. We act in historical space and time, as do the technological systems we devise and enlist as our surrogates and extensions. So when Siemens talks about a city’s autonomous systems acting on “perfect knowledge” of residents’ habits and behaviors, what they are suggesting in the first place is that everything those residents ever do — whether in public, or in spaces and settings formerly thought of as private — can be sensed accurately, raised to the network without loss, and submitted to the consideration of some system capable of interpreting it appropriately. And furthermore, that all of these efforts can somehow, by means unspecified, avoid being skewed by the entropy, error and contingency that mark everything else that transpires inside history.
Some skepticism regarding this scenario would certainly be understandable. It’s hard to see how Siemens, or anybody else, could avoid the slippage that’s bound to occur at every step of this process, even under the most favorable circumstances imaginable.
However thoroughly Siemens may deploy their sensors, to start with, they’ll only ever capture the qualities about the world that are amenable to capture, measure only those quantities that can be measured. Let’s stipulate, for the moment, that these sensing mechanisms somehow operate flawlessly, and in perpetuity. What if information crucial to the formulation of sound civic policy is somehow absent from their soundings, resides in the space between them, or is derived from the interaction between whatever quality of the world we set out to measure and our corporeal experience of it?
Other distortions may creep into the quantification of urban processes. Actors whose performance is subject to measurement may consciously adapt their behavior to produce metrics favorable to them in one way or another. For example, a police officer under pressure to “make quota” may issue citations for infractions she would ordinarily overlook; conversely, her precinct commander, squeezed by City Hall to present the city as an ever-safer haven for investment, may downwardly classify felony assault as a simple misdemeanor. This is the phenomenon known to viewers of The Wire as “juking the stats,” and it’s particularly likely to happen when financial or other incentives are contingent on achieving some nominal performance threshold. Nor is it the only factor likely to skew the act of data collection; long, sad experience suggests that the usual array of all-too-human pressures will continue to condition any such effort. (Consider the recent case in which Seoul Metro operators were charged with using CCTV cameras to surreptitiously ogle women passengers, rather than scan platforms and cars for criminal activity as intended.)
What about those human behaviors, and they are many, that we may for whatever reason wish to hide, dissemble, disguise, or otherwise prevent being disclosed to the surveillant systems all around us? “Perfect knowledge,” by definition, implies either that no such attempts at obfuscation will be made, or that any and all such attempts will remain fruitless. Neither one of these circumstances sounds very much like any city I’m familiar with, or, for that matter, would want to be.
And what about the question of interpretation? The Siemens scenario amounts to a bizarre compound assertion that each of our acts has a single salient meaning, which is always and invariably straightforwardly self-evident — in fact, so much so that this meaning can be recognized, made sense of and acted upon remotely, by a machinic system, without any possibility of mistaken appraisal.
The most prominent advocates of this approach appear to believe that the contingency of data capture is not an issue, nor is any particular act of interpretation involved in making use of whatever data is retrieved from the world in this way. When discussing their own smart-city venture, senior IBM executives argue, in so many words, that “the data is the data”: transcendent, limpid and uncompromised by human frailty. This mystification of “the data” goes unremarked upon and unchallenged not merely in IBM’s material, but in the overwhelming majority of discussions of the smart city. But different values for air pollution in a given location can be produced by varying the height at which a sensor is mounted by a few meters. Perceptions of risk in a neighborhood can be transformed by altering the taxonomy used to classify reported crimes ever so slightly. And anyone who’s ever worked in opinion polling knows how sensitive the results are to the precise wording of a survey. The fact is that the data is never “just” the data, and to assert otherwise is to lend inherently political and interested decisions regarding the act of data collection an unwonted gloss of neutrality and dispassionate scientific objectivity.
The bold claim of perfect knowledge appears incompatible with the messy reality of all known information-processing systems, the human individuals and institutions that make use of them and, more broadly, with the world as we experience it. In fact, it’s astonishing that anyone would ever be so unwary as to claim perfection on behalf of any computational system, no matter how powerful.
— One and only one solution: Cities, with their inherent, definitional diversity, layeredness and complexity, can usefully be thought of as tragic. As individuals and communities, the people who live in them hold to multiple competing and equally valid conceptions of the good, and it’s impossible to fully satisfy all of them at the same time. A wavefront of gentrification can open up exciting new opportunities for young homesteaders, small retailers and craft producers, but tends to displace the very people who’d given a neighborhood its character and identity. An increased police presence on the streets of a district reassures some residents, but makes others uneasy, and puts yet others at definable risk. Even something as seemingly straightforward and honorable as an anticorruption initiative can undo a fabric of relations that offered the otherwise voiceless at least some access to local power. We should know by now that there are and can be no Pareto-optimal solutions for any system as complex as a city.
— Arrived at algorithmically: Assume, for the sake of argument, that there could be such a solution, a master formula capable of resolving all resource-allocation conflicts and balancing the needs of all a city’s competing constituencies. It certainly would be convenient if this golden mean could be determined automatically and consistently, via the application of a set procedure — in a word, algorithmically.
In urban planning, the idea that certain kinds of challenges are susceptible to algorithmic resolution has a long pedigree. It’s already present in the Corbusian doctrine that the ideal and correct ratio of spatial provisioning in a city can be calculated from nothing more than an enumeration of the population, it underpins the complex composite indices of Jay Forrester’s 1969 Urban Dynamics, and it lay at the heart of the RAND Corporation’s (eventually disastrous) intervention in the management of 1970s New York City. No doubt part of the idea’s appeal to smart-city advocates, too, is the familial resemblance such an algorithm would bear to the formulae by which commercial real-estate developers calculate air rights, the land area that must be reserved for parking in a community of a given size, and so on.
In the right context, at the appropriate scale, such tools are surely useful. But the wholesale surrender of municipal management to an algorithmic toolset — for that is surely what is implied by the word “autonomous” — would seem to repose an undue amount of trust in the party responsible for authoring the algorithm. At least, if the formulae at the heart of the Siemens scenario turn out to be anything at all like the ones used in the current generation of computational models, critical, life-altering decisions will hinge on the interaction of poorly-defined and surprisingly subjective values: a “quality of life” metric, a vague category of “supercreative” occupations, or other idiosyncrasies along these lines. The output generated by such a procedure may turn on half-clever abstractions, in which a complex circumstance resistant to direct measurement is represented by the manipulation of some more easily-determined proxy value: average walking speed stands in for the more inchoate “pace” of urban life, while the number of patent applications constitutes an index of “innovation.”
Even beyond whatever doubts we may harbor as to the ability of algorithms constructed in this way to capture urban dynamics with any sensitivity, the element of the arbitrary we see here should give us pause. Given the significant scope for discretion in defining the variables on which any such thing is founded, we need to understand that the authorship of an algorithm intended to guide the distribution of civic resources is itself an inherently political act. And at least as things stand today, neither in the Siemens material nor anywhere else in the smart-city literature is there any suggestion that either algorithms or their designers would be subject to the ordinary processes of democratic accountability.
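To see just how much hangs on those discretionary choices, consider a toy sketch (in Python, with hypothetical cities, proxy values and weightings; it reproduces no vendor's actual model) of the kind of composite index at issue:

```python
# A toy composite "quality of life" index, built from easily-measured
# proxies: average walking speed standing in for "pace," patent filings
# for "innovation," and so on. All values here are invented.
cities = {
    "Alphaville": {"walking_speed": 1.6, "patents_per_10k": 4.1, "survey_score": 6.2},
    "Betatown":   {"walking_speed": 1.2, "patents_per_10k": 7.3, "survey_score": 7.0},
}

def normalized(cities):
    """Min-max normalize each proxy to [0, 1] so they can be summed at all."""
    keys = next(iter(cities.values())).keys()
    lo = {k: min(c[k] for c in cities.values()) for k in keys}
    hi = {k: max(c[k] for c in cities.values()) for k in keys}
    return {name: {k: (c[k] - lo[k]) / (hi[k] - lo[k]) for k in keys}
            for name, c in cities.items()}

def composite(city, weights):
    """A weighted sum of proxy values: the entire 'algorithm' in miniature."""
    return sum(w * city[k] for k, w in weights.items())

norm = normalized(cities)

# Two equally defensible weightings of the same data...
weights_a = {"walking_speed": 0.6, "patents_per_10k": 0.2, "survey_score": 0.2}
weights_b = {"walking_speed": 0.2, "patents_per_10k": 0.4, "survey_score": 0.4}

for label, w in (("A", weights_a), ("B", weights_b)):
    ranking = sorted(norm, key=lambda name: composite(norm[name], w), reverse=True)
    print(f"weighting {label}: {ranking}")
# ...produce opposite rankings: Alphaville comes out on top under weighting A,
# Betatown under weighting B. Nothing in the data adjudicates between them;
# that choice belongs entirely to the algorithm's author.
```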
— Encoded in public policy, and applied transparently, dispassionately and in a manner free from politics: A review of the relevant history suggests that policy recommendations derived from computational models are only rarely applied to questions as politically sensitive as resource allocation without some intermediate tuning taking place. Inconvenient results may be suppressed, arbitrarily overridden by more heavily-weighted decision factors, or simply ignored.
The best-documented example of this tendency remains the work of the New York City-RAND Institute, explicitly chartered to implant in the governance of New York City “the kind of streamlined, modern management that Robert McNamara applied in the Pentagon with such success” during his tenure as Secretary of Defense (1961-1968). The statistics-driven approach that McNamara’s Whiz Kids had so famously brought to the prosecution of the war in Vietnam, variously thought of as “systems analysis” or “operations research,” was first applied to New York in a series of studies conducted between 1973 and 1975, in which RAND used FDNY incident response-time data to determine the optimal distribution of fire stations.
Methodological flaws undermined the effort from the outset. RAND, for simplicity’s sake, chose to use the time a company arrived at the scene of a fire as the basis of their model, rather than the time at which that company actually began fighting the fire; somewhat unbelievably, for anyone with the slightest familiarity with New York City, RAND’s analysts then compounded their error by refusing to acknowledge traffic as a factor in response time. Again, we see some easily-measured value used as a proxy for a reality that is harder to quantify, and again we see the distortion of ostensibly neutral results by the choices made by an algorithm’s designers. But the more enduring lesson for proponents of data-driven policy has to do with how the study’s results were applied. Despite the mantle of coolly “objective” scientism that systems analysis preferred to wrap itself in, RAND’s final recommendations bowed to factionalism within the Fire Department, as well as the departmental leadership’s need to placate critical external constituencies; the exercise, in other words, turned out to be nothing if not political.
The consequences of RAND’s intervention were catastrophic. Following their recommendations, fire battalions in some of the most vulnerable sections of the city were decommissioned, while the department opened other stations in low-density, low-threat areas; the resulting spatial distribution of firefighting assets actually prevented resources from being applied where they were most critically needed. Great swaths of the city’s poorest neighborhoods burned to the ground as a direct result — most memorably the South Bronx, but immense tracts of Manhattan and Brooklyn as well. Hundreds of thousands of residents were displaced, many permanently, and the unforgettable images that emerged fueled perceptions of the city’s nigh-apocalyptic unmanageability that impeded its prospects well into the 1980s. Might a less-biased model, or a less politically-skewed application of the extant findings, have produced a more favorable outcome? This obviously remains unknowable…but the human and economic calamity that actually did transpire is a matter of public record.
Examples like this counsel us to be wary of claims that any autonomous system will ever be entrusted with the regulation and control of civic resources — just as we ought to be wary of claims that the application of some single master algorithm could result in a Pareto-efficient distribution of resources, or that the complex urban ecology might be sufficiently characterized in data to permit the effective operation of such an algorithm in the first place. For all of the conceptual flaws we’ve identified in the Siemens proposition, though, it’s the word “goal” that just leaps off the page. In all my thinking about cities, it has frankly never occurred to me to assert that cities have goals. (What is Cleveland’s goal? Karachi’s?) What is being suggested here strikes me as a rather profound misunderstanding of what a city is. Hierarchical organizations can be said to have goals, certainly, but not anything as heterogeneous in composition as a city, and most especially not a city in anything resembling a democratic society.
By failing to account for the situation of technological devices inside historical space and time, the diversity and complexity of the urban ecology, the reality of politics or, most puzzlingly of all, the “normal accidents” all complex systems are subject to, Siemens’ vision of cities perfectly regulated by autonomous smart systems thoroughly disqualifies itself. But it’s in this depiction of a city as an entity with unitary goals that it comes closest to self-parody.
If it seems like breaking a butterfly on a wheel to subject marketing copy to this kind of dissection, remember that I am merely taking Siemens and the other advocates of the smart city at their word: this is what they (claim to) really believe. When pushed on the question, of course, some individuals working for enterprises at the heart of the smart-city discourse admit that what their employers actually propose to do is distinctly more modest: they simply mean to deploy sensors on municipal infrastructure, and adjust lighting levels, headway or flow rates to accommodate real-time need. If this is the case, perhaps they ought to have a word with their copywriters, who do the endeavor no favors by indulging in the imperial overreach of their rhetoric. As matters now stand, the claim of perfect competence that is implicit in most smart-city promotional language — and thoroughly explicit in the Siemens material — is incommensurate with everything we know about the way technical systems work, as well as the world they work in. The municipal governments that constitute the primary intended audience for materials like these can only be advised, therefore, to approach all such claims with the greatest caution.
 For example, in New York City, an anonymous survey of “hundreds of retired high-ranking [NYPD] officials” found that “tremendous pressure to reduce crime, year after year, prompted some supervisors and precinct commanders to distort crime statistics” they submitted to the centralized COMPSTAT system. Chen, David W., “Survey Raises Questions on Data-Driven Policy,” The New York Times, 08 February 2010.
 Simon, David, Kia Corthron, Ed Burns and Chris Collins, The Wire, Season 4, Episode 9: “Know Your Place,” first aired 12 November 2006.
 Fletcher, Jim, IBM Distinguished Engineer, and Guruduth Banavar, Vice President and Chief Technology Officer for Global Public Sector, personal communication, 08 June 2011.
 Migurski, Michal. “Visualizing Urban Data,” in Segaran, Toby and Jeff Hammerbacher, Beautiful Data: The Stories Behind Elegant Data Solutions, O’Reilly Media, Sebastopol CA, 2009: pp. 167-182. See also Migurski, Michal. “Oakland Crime Maps X,” tecznotes, 03 March 2008.
 See, as well, Sen’s dissection of the inherent conflict between even mildly liberal values and Pareto optimality. Sen, Amartya Kumar. “The impossibility of a Paretian liberal.” Journal of Political Economy Volume 78 Number 1, Jan-Feb 1970.
 Forrester, Jay. Urban Dynamics, The MIT Press, Cambridge, MA, 1969.
 See Flood, Joe. The Fires: How a Computer Formula Burned Down New York City — And Determined The Future Of American Cities, Riverhead Books, New York, 2010.
 See, e.g. Bettencourt, Luís M.A. et al. “Growth, innovation, scaling, and the pace of life in cities,” Proceedings of the National Academy of Sciences, Volume 104 Number 17, 24 April 2007, pp. 7301-7306.
 Flood, ibid., Chapter Six.
 Rider, Kenneth L. “A Parametric Model for the Allocation of Fire Companies,” New York City-RAND Institute report R-1615-NYC/HUD, April 1975; Kolesar, Peter. “A Model for Predicting Average Fire Company Travel Times,” New York City-RAND Institute report R-1624-NYC, June 1975.
 Perrow, Charles. Normal Accidents: Living with High-Risk Technologies, Basic Books, New York, 1984.
An article I was commissioned to write for the Touch issue of What’s Next magazine.
What does it mean for a text to be digital?
In principle, it can be replicated in perfect fidelity, and transmitted to an unlimited number of recipients worldwide, at close to zero cost. Powerful analytic tools can be brought to bear on it, and our reading of it. It can be compared against other texts, plumbed for clues as to its provenance and authorship. Each of our acts of engagement with it — whether of acquisition, reading, or annotation — can be shared with our social networks, mobilized as props in an ongoing performance of self. Above all, it becomes (to use the jargon practically unavoidable in any discussion of information technology) “platform-agnostic.” This is to say that it becomes independent, to a very great degree, of the physical medium in which it currently happens to be instantiated.
To varying degrees, these things have been true as long as words have been encoded in ones and zeroes — certainly since 1971, when Project Gutenberg was founded with the intention of digitizing as much of the world’s literature as possible, and making it all available for free. Why is it the case, then, that digital books only seem to have entered our lives in any major way in the last two or three years?
The apparently sudden arrival of the digital text likely owes something to the top-of-mind quality Amazon currently enjoys in its main markets, its name and value proposition as prominent in our awareness as those of the grocery chains, television networks or airlines we patronize — a presence it’s taken the company the better part of the last fifteen years to build up. And it surely has something to do with the widespread popular facility with the tropes and metaphors governing our engagement with digital content of all sorts that has developed over the same period of time, to the point that it’s increasingly hard to meet a grandparent inconversant with downloads, torrents and the virtues of cloud storage.
But the fundamental reason is probably that bit about platform-agnosticism. Anyone so inclined could have “engaged digital text” on a conventional computer at any point in the past forty years. But the act of reading didn’t — and maybe couldn’t — properly come into its own in the digital era until there was a platform for literature as present to the senses as paper itself, something as well-suited to the digital text as the road is to the automobile. I refer, of course, to the networked tablet.
It’s only with the widespread embrace of these devices that digital reading has become ubiquitous. Relatively inexpensive, lightweight and comfortable in the hand, and capable of storing thousands of volumes, the tablet has merits as a reading environment that may strike us as self-evident. But there’s another factor that underlies its general appeal, and that is the specific phenomenology of the way we manipulate reading material when using one.
We read text on a tablet as pixels, just as we would on any screen. But the ways in which we physically address and move through a body of such pixels have more in common with the behaviors we learned from books in earliest childhood than with anything we picked up in the course of later encounters with computers. This is why the post-PC tablet feels more “intuitive” to us, despite the frank novelty of the gestures we must learn in order to use it, and which no book in the world has ever afforded: the swipe, the drag, the pinch, the tap.
This is the new tactility of reading. But where there are comparatively few semantically-meaningful ways in which the reader’s hand can meet the pages of a material book, the experience of engaging a digital text with the finger is subject to a certain variability. It’s not a boundless freedom — it’s delimited on one side by technological limitations, and on the other by the choices of an interaction designer — but it does require explication.
The first order of variability is the screen medium itself. Each of the major touchscreen technologies available — resistive, capacitive, projected-capacitive, optical — imposes its own constraints on the latency and resolution with which a screen registers a touch, and therefore how long one must place one’s finger against it to turn a page or select a word for definition or a passage for annotation. Reading on a good screen feels effortless, even transparent — but particularly high latency or low resolution can easily disrupt the flow of experience, lifting the reader up and out of the text entirely.
The second is the treatment of type. As critical as it is to the legibility and emotional resonance of a text, and even at the higher resolutions now theoretically available, typography is all but invariably treated as though it had not been refined over five centuries. It still feels like we are many years and product versions away from type on the tablet rendered with the craft and care it deserves.
A third order of variability consists in the separation of content, style and interface elements inherent in contemporary application design. This means that both the meaning of gestural interactions and the treatment of the page itself can vary from environment to environment. Especially given the pressure developers are under to differentiate their products from one another, a tap in the Kindle for iPad application may not mean precisely what a tap in Readmill or Instapaper or Reeder does, or work in at all the same way.
In fact, something as simple and as basic to the act of reading as turning a page is handled differently in all of these contexts.
Originally, of course, the pagination of text was an artifact of necessity, something imposed by running a semantically continuous text across a physically discontinuous quantity of leaves. One might think, therefore, that pagination would be among the first things to go in making the leap to the digital reading environment, but contemporary applications tend to retain it as a skeuomorphism, larding down the interaction with animated page curls and sound effects.
On the Kindle proper, the reader presses a button — one for page forward, another for page back — and the entire screen blanks and refreshes as the new page loads, a transition imposed by the nature of electronic pigment. In the Kindle app, by contrast, the page slides right to left, slipping from future to present to past in a series of discrete taps.
The Instapaper application is, perhaps, truest to the nature of digital copy. It dispenses with all of this, and treats the document as one continuous environment: swipe upward when you’re ready for more. Instapaper is an acknowledgment of the text’s liberation from the constraints of crude matter. Handled this way, there’s no reason a digital text can’t return to something approximating the book’s earliest form, a scroll — in this case, one capable of unspooling without limit.
Finally, we also need to account for what it means to absorb text as a luminous projection. Marshall McLuhan drew a distinction between “light-on” media — that is, those in which content inscribed on a passive surface like paper is illuminated by an external light source — and “light-through” media, like our luminous tablets; per his insistence that medium is coextensive with message, we can assume that the selfsame text consumed in these two ways would be received differently, emotionally every bit as much as cognitively.
As it happens, I have both an actual, e-paper Kindle — digital, but nevertheless light-on — and Kindle applications for the eminently light-through iPhone and iPad. And purely anecdotally, it does seem to be the case that I have an easier time with thornier, weightier reading on the e-paper device. Novels are fine on the iPad, even on my phone…but if I want to wrestle with Graham Harman or Susan Sontag, I reach for the Kindle.
The McLuhanite in me frets that, in embracing the tablet, we inadvertently give up much of our engagement with the text. That beyond sentimentality, there is something about the act of turning a page to punctuate a thought, or the phenomenology of light reflecting off of paper saturated with ink, that conditions the act of reading and makes it what we recognize it to be, at some level beneath the threshold of conscious perception.
Which brings us back, at last, to the printed artifact. We can acknowledge that the networked tablet is a brilliant addition to any reader’s instrumentarium. I’m certain that it increases the number of times and places at which people read, and know from long, intimate and sorrowful personal experience the difference it makes where the portability of entire libraries is concerned. But it’s not quite the same thing as a book or a magazine, and cannot entirely replace them.
Curiously enough, the ambitions to which paper appears to remain best-suited are diametrically opposite:
On the one hand, deep, thoughtful engagement with a body of language, an engagement that fully leverages the craft of bookmaking. In this pursuit, the tablet cannot yet offer nearly the typographic nicety, conscious design for legibility or perceptual richness trivially available from ink on paper — all of the things, in other words, that permit the reader to immerse herself for longer, and with less strain.
But there are also occasions on which surface is all-important, where the ostensible content is almost incidental to the qualities of its packaging. Here the texture or other phenomenological qualities of paperstock itself — even its smell — communicate performatively; I think of glossy lifestyle magazines. It’s hard to imagine any tablet or similar device affording these virtues in anything like the near term.
If we understand a book as a container, the precise shape that container takes ought to reflect the nature of its intended contents, and what one proposes to do with them. Even acknowledging all the many virtues of networked, digital texts, the texture, tooth and heft of paper ensure that, at least in the contexts I’ve specified here, it remains irreplaceable among all the ways we contain thought as it flows from one human mind to another.
The other day I got mail asking me to contribute to something called usesthis, a site that asks a (frankly fairly homogeneous) selection of creative workers to describe their “setup” — or, in other words, the combination of hardware and software they use on a daily basis — as well as their ideal such arrangement.
I’m always happy enough for a prompt to think in this direction. Although usesthis isn’t really (no pun intended) set up to examine these issues, the whole question of a relationship between creative output and one’s choice of tools is inherently interesting, and is kind of an ongoing preoccupation of mine. As a good connectionist, I’m bound to believe that the artifacts we use mediate or allow us to approach the world in certain specific ways. It follows from this that our selection of one particular tool over another conditions the kind of relations we’re able to enter into — but also, that if the tool is functioning properly, we’re ordinarily unaware of its operations, or of this potential it has to constrain or to open.
If we’re inclined to examine that potential, a rigorous accounting for the intermediators we choose can help us rise up out of the usual, unconscious relation we have to them, and restore the sense of interested inquiry Heidegger (at least) calls presence-at-hand — see Peter Erdélyi’s foreword to The Prince and The Wolf for a particularly pungent version of this.
There’s a lot to say, too, about the determinisms implicit in our selection of specific tools. Very often, particular methods and tools tell in the finished work; it’s not simply, then, that mediating artifacts shape our own ability to act in the world, it’s that they indirectly condition the experience of everyone who comes into contact with the result of that action thereafter. (I’m put in mind of Matthew Fuller and Usman Haque’s prescient comment, in their Situated Technologies pamphlet Urban Versioning System 1.0, that “[i]t is often possible to determine, admittedly more so in a building than in a neighborhood, whether it was designed using AutoCAD, Microstation or Vectorworks.”)
I think it’s relatively easy to see what this means for creative domains like fashion, music, or (as the Fuller/Haque quote implies) architecture. Take the work of Issey Miyake, for example. We can trace the very different ways in which A-POC and the superficially similar Pleats Please line are perceived (by the wearer, by the observer) to specific techniques used in their creation, observe that the material qualities of Pleats Please garments result from polyester fabric being subjected to a particular heat-press process. The way the garment drapes on the body is the direct result of the cloth’s having been shaped by a particular regime of temperature, constraint and pressure — a regime which is in turn brought into local being by a highly particularized set of tools. If you’re interested in understanding why the Pleats Please line tends to appeal to women d’un certain âge, some consideration of how the designer’s understanding of the body is mediated to the body via the deployment of those tools seems indispensable.
Similarly, albeit in a rather different register, it strikes me as being very difficult to discuss Stephen O’Malley’s work without understanding at least a little something about drop-tuning, .68-gauge strings and the performance envelope of the Sunn Model T amplifier. The unique somatic (SOMAtic?) experience of a SUNN O))) gig is contingent on these elements — these things — being present, assembled and wielded in a particular way. The affordances and constraints of the objects yoked together in the act of production are directly relevant to the phenomenology of the finished product, even if that “product” is a ten-minute excursion in dronespace.
Casting light on the mesh of associations that bring a Pleats Please garment or a SUNN O))) cut into being does tend to construct creativity a little bit differently than we have traditionally been used to, and I think that’s entirely legitimate. Instead of positioning creation as the act of a lone genius, this way of looking at things suggests that the ability to bring novelty forth is, instead, something that’s smeared out across a network of heterogeneous participants, both human and non-human. This is certainly a decentering of the individual designer, but by no means do I necessarily think of it as an insult. It merely suggests that in those domains where creative production does require the enlistment of such ensembles, exceptional designerly talent ought properly be understood as the specific genius of knowing how to activate, and enable the operations of, such an ensemble — something more akin to orchestration than anything else. In this light, there’s still a great deal to be discovered by poking into the specifics of a given ensemble, and asking how each is brought to bear on the task of creation.
For those of us who work primarily in the medium of words, though, the case isn’t as clearcut.
It’s not as if at least some descriptions of the writer’s toolkit aren’t of interest. Here’s John Brunner, in the final words of his 1968 Stand on Zanzibar:
“This non-novel was brought to you by John Brunner using Spicer Plus Fabric Bond and Commercial Bank papers interleaved with Serillo carbons in a Smith Corona 250 electric typewriter fitted with a Kolok black-record ribbon.”
This was a good McLuhanite, speaking to the formal concerns of the Pop moment. That invocation of brands carries along with it a certain zazzy quality, a sense of liberation experienced in and through commodities I associate with Warren Chalk’s 1964 Living City Survival Kit. (In 1968, as four years earlier, you could still plausibly argue that this was fresh and revelatory.) In this case, as it happens, more specific yet is better. So not just any Smith Corona 250, but John Brunner’s Smith Corona 250. It adds something — something ineffable, and if you know anything about Brunner’s life, ineffably sad — to your appreciation of his oeuvre to read what’s on the Dymo-tape labels he affixed to this daily working tool.
But that has more to do with the object as environment, and only invokes the Smith Corona 250’s material properties and other affordances in the rather attenuated sense that its front affords a surface on which to stick a label. This, of course, is a quality it has in common with a great many other objects that might have occupied the same space on Brunner’s desk. And this begins to get to the crux of what I find a little curious about asking writers about their “setup.”
For me, anyway, focusing on getting things just-so is very little other than a way of delaying the moment I actually settle down to do what I need to. Most of us have some such ritual; Matt Jones memorably describes this process of lining up one’s pencils and notebooks (in preference to actually using the former to write in the latter) as “shaving the yak.” I’ll admit that I also find it a little unseemly, at this point in history, to mention specific named brands and commercial offerings. I’m not Warren Chalk, this isn’t London in 1964, and I’m not performing a swingin’ly post-austerity self through my consumption of Canadian Club and Miles Davis sides. So while, yeah, sure, I use such-and-such a text editor, under a given operating system, running on a particular model of laptop, you won’t learn that much about me — or more to the point, develop any particularly salient insight into the structuration of the argument I’m trying to make — by having these specifics revealed to you. The blunt truth of things is that I would almost certainly be expressing these same sentiments were I working in Microsoft Word on the kind of thoroughly generic, commodity Windows machine the “wrong people” use. From this perspective, the ideal setup of tools is nothing but the one that most readily dissolves into intention. ‘Nuff said, yeah?
I really want to recommend to you this Olivier Thereaux post about broken bus systems and how they might be fixed (and not just because I happen to be taking the MUNI a great deal lately).
What Olivier absolutely nails is the expression of a thought I’ve come back to again and again over the years: that buses and bus networks are by their nature so intimidating to potential users that many people will do just about anything to avoid engaging them. I don’t mind admitting that, depending on the city, the language in use, and my relative level of energy, I’m definitely to be numbered among those people. When buses are effectively the only mode of public transit available, that “just about anything” has occasionally meant laying out ridiculous sums on taxis; more often, it’s resulted in my walking equally absurd distances across cities I barely know.
“Intimidating,” in this context, doesn’t need to mean “terrifying.” It simply implies that the system is just complicated enough, just hard enough to form a mental model of, that the fear of winding up miles away from your intended destination — and possibly with no clear return route, not enough or the right kind of money to pay for a ticket, and no way of asking for clarification — is a real thing. There’s a threshold of comfort involved, and for quite a few categories of users (the young, the old, visitors, immigrants, people with literacy or other impairments) that threshold is set too high. People in this position wind up seeking alternatives…and if practical alternatives do not exist, they do without mobility altogether. They are lost to the city, and the city is lost to them.
The point is more broadly applicable, as well. You know I believe that cities are connection machines, networks of potential subject to Metcalfe’s law. What this means in the abstract is that the total value of an urban network rises as the square of the number of nodes connected to it. What this means in human terms is that a situation in which people are too intimidated to ride the bus (or walk down the street, or leave the apartment) is a sorrow compounded. Again: everything they could offer the network that is the city is lost. And everything we take for granted about the possibilities and promise of great urban places is foreclosed to them.
If you understand things this way, there’s a clear moral imperative inscribed in the design of systems like bus networks and interfaces. Every incremental thing the designer can do to demystify, explain, clarify, and ultimately to lower the threshold at which a potential user decides the risk of climbing aboard is worth taking does a double service — if the Metcalfe’s law construction of things rings true to you, a geometrical service. You are simultaneously improving the conditions under which an individual lives his or her life, and contributing materially to the commonweal. Not bad for a day’s work, if you ask me.
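For whatever it's worth, the arithmetic behind that "geometrical service" is easy enough to sketch. Taking the naive form of Metcalfe's law entirely at face value, and with invented figures:

```python
# Network value under the naive Metcalfe's-law reading: proportional to the
# number of potential connections among n nodes. The figures are invented;
# only the shape of the result matters.

def network_value(n: int) -> int:
    """Potential pairwise connections among n nodes: n(n-1)/2, i.e. ~n^2/2."""
    return n * (n - 1) // 2

residents = 1_000_000
excluded = 100_000  # say, the 10% too intimidated to ride the bus

full = network_value(residents)
diminished = network_value(residents - excluded)

print(f"connections lost:    {full - diminished:,}")             # ~95 billion
print(f"share of value lost: {(full - diminished) / full:.1%}")  # 19.0%
# Shutting 10% of the people out of the network forecloses ~19% of its
# potential connections: each person lost takes a link to everyone else
# with them. The loss compounds, exactly as described above.
```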
This is personal for me, too, and not just because I’ve occasionally found a route map overwhelming, or decided to walk from Bloomsbury to Dalston instead of chancing the N38 and winding up in, who knows, Calais. What I’ve come to understand, in these last few years of intense concentration on issues of urban design, is that my fascination with cities grows not at all out of ease or comfort with them, but the opposite. I’m an introvert, I’ve never been comfortable approaching strangers with questions, I’m twitchily hyperaware when I’m inconveniencing others (e.g. holding up a bus by asking questions of a driver) and my gifts for language are not great. Above all, I don’t like looking vulnerable and confused any more than anyone does, especially when traveling.
I’ve gotten better on all these counts over the course of my life, but they’re still issues. They can pop to the surface at any time, and, of course, are more likely to do so under conditions of stress. Taken together, what they spell for me is a relatively circumscribed ability to get around and enjoy the things the cities I visit have to offer — relatively, that is, compared to other able-bodied people my own age and with similar levels of privilege. Even this limitation, though, makes me acutely aware of just how difficult getting around can be, how very intimidating it can all seem, and what both people and place stand to lose each and every single time this intimidation is allowed to govern outcomes.
This is why I believe Olivier is absolutely right to focus on design interventions that reduce user stress, and, with all due respect, it’s why I think people like this Speedbird commenter, who understand cities solely as generators of upside potential, are missing something in the empathy department. There are an awful lot of people, everywhere around us, in every city, who have difficulty negotiating the mobility (and other) systems that are supposed to serve their needs. As far as I’m concerned, anyway, it is the proper and maybe even the primary task of the urban systems designer to work with compassion and fearless empathy to address this difficulty. Only by doing so can we extend the very real promise of that upside potential to the greatest possible number of people who would otherwise be denied it, in part or in full, and only by doing so can we realize in turn the full flowering of what they have to offer us.
I’m halfway through Reinventing the Automobile at the moment, which I figure represents the final comprehensive statement of Bill Mitchell’s thinking about urban mobility. As you’d imagine, it’s a passionately-held and painstakingly worked-out vision, basically the summation of all the work anyone with an interest in the space has seen in dribs and drabs over the past few years; it’s clear, for example, that this is what all the work on P.U.M.A. and MIT CityCar was informed by and leading towards.
In outline, Reinventing presents the reader with four essential propositions about the nature of next-generation urban mobility, none of which I necessarily disagree with prima facie:
- That the design principles and assumptions underlying the contemporary automobile — descended as they are, in an almost straight line, from the horseless carriage — are badly obsolete. Specifically, industry conventions regarding a vehicle’s source of motive power, drive and control mechanism, and mode of operation ought to be discarded in their entirety and replaced with ones more appropriate to an age of dense cities, networks, lightweight materials, clean energy and great personal choice.
- That mobility itself is being transformed by information; that extraordinary efficiencies can be realized and tremendous amounts of latent value unlocked if passenger, vehicle and the ground against which both are moving are reconceived as sources and brokers of, and agents upon, real-time data. (Where have I heard that before?)
- That the physical and conceptual infrastructure underlying the generation, storage and distribution of energy is also, and simultaneously, being transformed by information, with implications (again) for the generation of motive power, as well as the provision of environmental, information, communication and entertainment services to vehicles.
- That the above three developments permit (compel?) the wholesale reconceptualization of vehicles as agents in dynamic pricing markets for energy, road-space and parking resources, as well as significantly more conventional vehicle-share schemes.
It’s only that last one that I have any particular quibbles with. Even before accounting for the creepy hints of emergent AI in commodity-trading software I keep bumping up against (and that’s only meant about 75% tongue-in-cheek), I’m not at all convinced that empowering mobile software avatars to bid on road resources in tightly-coupled, nanosecond loops will ever lead to anything but the worst and most literal sort of gridlock.
But that’s not the real problem I have with this body of work. What I really tripped over, as I read, was the titanic dissonance between the MIT vision of urban life and mobility and the one that I was immersed in as I rode the 33 bus across town. It’s a cheap shot, maybe, but I just couldn’t get past the gulf between the actual San Franciscans around me — the enormous, sweet-looking Polynesian kid lost in a half-hour-long spell of autistic head-banging that took him from Oak and Stanyan clear into the Mission; the grizzled but curiously sylphlike person of frankly indeterminate gender, stepping from the bus with a croaked “God bless you, driver” — and the book’s depiction of sleekly silhouetted personae-people reclining into the Pellicle couches of their front-loading CityCars.
Any next-generation personal mobility system that didn’t take the needs and capabilities of people like these — no: these people, as individuals with lives and stories — into account…well, I can’t imagine that any such thing would be worth the very significant effort of bringing it into being. And despite some well-intentioned gestures toward the real urban world in the lattermost part of the book, projected mobility-on-demand sitings for Taipei and so on, there’s very little here that treats present-day reality as anything but something that Shall Be Overcome. It’s almost as if the very, very bright people responsible for Reinventing the Automobile have had to fend off any taint of human frailty, constraint or limitation in order to haul their total vision up into the light. (You want to ask, particularly, if any of them had ever read Aramis.)
Weirdly enough, the whiff of Gesamtkunstwerk I caught off of Reinventing reminded me of nothing so much as a work you’d be hard-pressed to think of as anything but its polar opposite, J.H. Crawford’s Carfree Cities. That, too, is a work where an ungodly amount of effort has been lavished on detailed depictions of the clean-slate future…and that, too, strikes me as refusing to engage the world as it is.
Maybe I wind up so critical of these dueling visions of future cities and mobility in them precisely because they are total solutions, and I’m acutely aware of my own weakness for and tendency toward same. I don’t think I’d mind, at all, living in one of Crawford’s carfree places, nor can I imagine that the MIT cityscape would be anything but an improvement on the status quo (if the devil was hauled out of its details and treated to a righteous ass-whupping). But to paraphrase one of my favorite philosophers, you go to the future with the cities, vehicles and people you have, not the ones you want. I have to imagine — have to — that the truly progressive and meaningful mobility intervention has a lot more to do with building on what people are already doing, and that’s even stipulating the four points above.
Bolt-on kits. Adaptive reuse. Provisional and experimental rezoning. Frameworks, visualizations and models that incorporate existing systems and assets, slowly revealing them (to users, planners, onlookers) to be nothing other than the weavings of a field, elements of a transmobility condition. And maybe someone whose job it is to account for everyone sidelined by the sleek little pods, left out of the renderings when the New Mobility was pitched to its sponsors.
Bottom line: this book is totally worth buying, reading and engaging if you have even the slightest interest in this topic. Its spinal arguments are very well framed, very clearly articulated, constructed in a way that makes them very difficult to mount cogent objections to…and almost certainly irrelevant to the way personal urban mobility is going to evolve, at least at the level of whole systems. And that’s the trouble, really, because so much of the value in the system described in these pages only works as a holism.
Like my every other negotiation with Bill Mitchell’s thought, including both engagements with his work and encounters in person, I want to be convinced. I want to believe. I want to be seduced by the optimism and the confidence that these are the right answers. But ultimately, as on those other occasions, I’m left with the sense that there are some important questions that have gone unasked, and which could not in any event have been satisfactorily answered in the framework offered. It may or may not say more about me than it does about anything else, but I just can’t see how the folks on the 33 Stanyan fit into the MIT futurama.
Google’s recent announcement of App Inventor is one of those back-to-the-future moments that simultaneously stirs up all kinds of furtive and long-suppressed hope in my heart…and makes me wonder just what the hell has taken so long, and why what we’re being offered is still so partial and wide of the mark.
I should explain. At its simplest, App Inventor does pretty much what it says on the tin. The reason it’s generating so much buzz is because it offers the non-technically inclined, non-coders among us an environment in which we can use simple visual tools to create reasonably robust mobile applications from scratch — in this case, applications for the Android operating system.
In this, it’s another step toward a demystification and user empowerment that had earlier been gestured at by scripting environments like Apple’s Automator and (to a significantly lesser degree) Yahoo! Pipes. But you used those things to perform relatively trivial manipulations on already-defined processes. I don’t want to overstate its power, especially without an Android device of my own to try the results on, but by contrast you use App Inventor to make real, usable, reusable applications, at a time when we understand our personal devices to be little more than a scrim on which such applications run, and there is a robust market for them.
This is a radical thing to want to do, in both senses of that word. In its promise to democratize the creation of interactive functionality, App Inventor speaks to an ambition that has largely lain dormant beneath what are now three or four generations of interactive systems — one, I would argue, that is inscribed in the rhetoric of object-oriented programming itself. If functional units of executable code can be packaged in modular units, those units in turn represented by visual icons, and those icons presented in an environment equipped with drag-and-drop physics and all the other familiar and relatively easy-to-grasp interaction cues provided us by the graphical user interface…then pretty much anybody who can plug one Lego brick into another has what it takes to build a working application. And that application can both be used “at home,” by the developer him- or herself, and released into the wild for others to use, enjoy, deconstruct and learn from.
There’s more to it than that, of course, but that’s the crux of what’s at stake here in schematic. And this is important because, for a very long time now, the corpus of people able to develop functionality, to “program” for a given system, has been dwindling as a percentage of interactive technology’s total userbase. Each successive generation of hardware from the original PC onward has expanded the userbase — sometimes, as with the transition from laptops to network-enabled phones, by an order of magnitude or more.
The result, unseemly to me, is that some five billion people on Earth have by now embraced interactive networked devices as an intimate part of their everyday lives, while the tools and languages necessary to develop software for them have remained arcane, the province of a comparatively tiny community. And the culture that community has in time developed around these tools and languages? Highly arcane — as recondite and unwelcoming, to most of us, as a klatsch of Comp Lit majors mulling phallogocentrism in Derrida and the later works of M.I.A.
A further consequence of this — unlooked-for, perhaps, but no less significant for all of that — is that the community of developers winds up having undue influence over how users conceive of interactive devices, and the kinds of things they might be used for. Alan Kay’s definition of full technical literacy, remember, was the ability to both read and write in a given medium — to create, as well as consume. And by these lights, we’ve been moving further and further away from literacy and the empowerment it so reliably entrains for a very long time now.
So an authoring environment that made creation as easy as consumption — especially one that, like View Source in the first wave of Web browsers, exposed something of how the underlying logical system functioned — would be a tremendous thing. Perhaps naively, I thought we’d get something like this with the original iPhone: a latterday HyperCard, a tool lightweight and graphic and intuitive as the device itself, but sufficiently powerful that you could make real things with it.
Maybe that doesn’t mesh with Apple’s contemporary business model, though, or stance regarding user access to deeper layers of device functionality, or whatever shoddy, paternalistic rationale they’ve cooked up this week to justify their locking iOS against the people who bought and paid for it. And so it’s fallen to Google, of all institutions, to provide us with the radically democratizing thing; the predictable irony, of course, is that in look and feel, the App Inventor composition wizard is so design-hostile, so Google-grade that only the kind of engineer who’s already comfortable with more rigorous development alternatives is likely to find it appealing. The idea is, mostly, right…but the execution is so very wrong.
There’s a deeper issue still, though, which is why I say “mostly right.” Despite applauding any and every measure that democratizes access to development tools, in my heart of hearts I actually think “apps” are a moribund way of looking at things. That the “app economy” is a dead end, and that even offering ordinary people the power to develop real applications is something of a missed opportunity.
Maybe that’s my own wishful thinking: I was infected pretty early on with the late Jef Raskin’s way of thinking about interaction, as explored in his book The Humane Interface and partially instantiated in the Canon Cat. What I took from my reading of Raskin is the notion that chunking up the things we do into hard, modal “applications” — each with a discrete user interface, each (still!) requiring time to load, each presenting us with a new learning curve — is kind of foolish, especially when there are a core set of operations that will be common to virtually everything you want to do with a device. Some of this thinking survives in the form of cross-application commands like Cut, Copy and Paste, but still more of it has seemingly been left by the wayside.
There are ways in which Raskin’s ideas have dated poorly, but in others his principles are as relevant as ever. I personally believe that, if those of us who conceive of and deliver interactive experiences truly want to empower a userbase that is now on the order of billions of people, we need to take a still deeper cut at the problem. We need to climb out of the application paradigm entirely, and figure out a better and more accessible way of representing distributed computational processes and how to get information into and out of them. And we need to do this now, because we can clearly see that those interactive experiences are increasingly taking place across and between devices and platforms — at first for those of us in the developed world, and very soon now, for everyone.
In other words, I believe we need to articulate a way of thinking about interactive functionality and its development that is appropriate to an era in which virtually everyone on the planet spends some portion of their day using networked devices; to a context in which such devices and interfaces are utterly pervasive in the world, and the average person is confronted with a multiplicity of same in the course of a day; and to the cloud architecture that undergirds that context. Given these constraints, neither applications nor “apps” are quite going to cut it.
Accordingly, in my work at Nokia over the last two years, I’ve been arguing (admittedly to no discernible impact) that as a first step toward this we need to tear down the services we offer and recompose them from a kit of common parts, an ecology of free-floating, modular functional components, operators and lightweight user-interface frameworks to bind them together. The next step would then be to offer the entire world access to this kit of parts, so anyone at all might grab a component and reuse it in a context of their own choosing, to develop just the functionality they or their social universe require, recognize and relate to. If done right, then you don’t even need an App Inventor, because the interaction environment itself is the “inventor”: you grab the objects you need, and build what you want from them.
One, two, many Facebooks. Or Photoshops. Or Tripits or SketchUps or Spotifys. All interoperable, all built on a framework of common tools, all producing objects in turn that could be taken up and used by any other process in the weave.
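Just to make the notion a shade less abstract, here is a minimal sketch of what snapping such parts together might feel like, in Python, with entirely hypothetical component names; it reflects no actual Nokia (or anyone else's) architecture:

```python
# Free-floating functional components: a source, an operator, and a sink,
# each usable on its own and composable with anything else in the kit.
# All names here are invented for illustration.

from typing import Callable, Iterable

def from_feed(items: Iterable[dict]) -> Iterable[dict]:
    """Source: yields items from some stream the user already has access to."""
    yield from items

def near(place: str) -> Callable[[Iterable[dict]], Iterable[dict]]:
    """Operator: keeps only the items tagged with a given place."""
    return lambda stream: (item for item in stream if item.get("place") == place)

def notify(stream: Iterable[dict]) -> None:
    """Sink: stands in for a lightweight user-interface binding."""
    for item in stream:
        print(f"* {item['text']}")

# Anyone can snap these together, Lego-fashion, into exactly the
# functionality they need, without ever writing an "app" at all:
photos = [
    {"text": "sunset from the bridge", "place": "Helsinki"},
    {"text": "lunch special", "place": "Tampere"},
]
notify(near("Helsinki")(from_feed(photos)))  # prints only the Helsinki item
```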
This approach owes something to Ben Cerveny’s seminal talk at the first Design Engaged, though there he was primarily concerned with semantically-tagged data, and how an ecosystem of distributed systems might make use of it. There’s something in it that was first sparked by my appreciation of Jun Rekimoto’s Data Tiles, and it also has some underlying assumptions in common with the rhetoric around “activity streams.” What I ultimately derive from all of these efforts is the thought that we (yes: challenge that “we”) ought to be offering the power of ad-hoc process definition in a way that any one of us can wrap our heads around, which would in turn underwrite the most vibrant, fecund/ating planetary ecosystem of such processes.
In this light, Google’s App Inventor is both a wonderful thing, and a further propping-up of what I’m bound to regard as a stagnating and unhelpful paradigm. I’m both excited to see what people do with it, and more than a little saddened that this is still the conversation we’re having, here in 2010.
There is one further consideration for me here, though, that tends to soften the blow. Not that I’m at all comparing myself to them, in the slightest, but I’m acutely aware of what happens to the Ted Nelsons and Doug Engelbarts of the world. I’ve seen what comes of “visionaries” whose insight into how things ought to be done is just that little bit too far ahead of the curve, how they spend the rest of their careers (or lives) more or less bitterly complaining about how partial and unsatisfactory everything that actually does get built turned out to be. If all that happens is that App Inventor and its eventual, more aesthetically well-crafted progeny do help ordinary people build working tools, firmly within the application paradigm, I’ll be well pleased — well pleased, and no mistake. But in some deeper part of me, I’ll always know that we could have gone deeper still, taken on the greater challenge, and done better by the people who use the things we make.
We still can.