Consider the driverless car, as currently envisioned by Google.
As far as I can tell, anyway, most discussion of its prospects, whether breathlessly anticipatory or frankly horrified, is content to weigh it more or less as given. But as I’m always harping on about, I just don’t believe we usefully understand any technology in the abstract, as it sits on a smoothly-paved pad in placid Mountain View. To garner even a first-pass appreciation for the contours of its eventual place in our lives, we have to consider how it would work, and how people would experience it, in a specified actual context. And so here — as just such a first pass, at least — I try to imagine what would happen if autonomous vehicles like those demoed by Google were deployed as a service in the place I remain most familiar with, New York City.
The most likely near-term scenario is that such vehicles would be constructed as a fleet of automated taxicabs, not the more radical and frankly more interesting possibility that the service embracing them would be designed to afford truly public transit. The truth of the matter is that the arrival of the technological capability bound up in these vehicles begins to upend these standing categories…but the world can only accommodate so much novelty at once. The vehicle itself is only one component of a distributed actor-network dedicated to the accomplishment of mobility; when the autonomous vehicle begins to supplant the conventional taxi, that whole network has to restabilize around both the vehicle’s own capabilities and the ways in which those capabilities couple with other, existing actors.
In this case, that means actors like the Taxi and Limousine Commission. Enabling legislation, a body of suitable regulation, a controlling legal authority, the agreement on procedures for assessing liability to calibrate the provision of insurance: all of these things will need to be decided upon before any such thing as the automation of surface traffic in New York City can happen. And these provisions have a conservative effect. For the duration of some transitional period, anyway, they’ll tend to drag this theoretically disruptive actor back toward the categories we’re familiar with, the modes in which we’re used to the world working. That period may last months or it may last decades; there’s just no way of knowing ahead of time. But during this interregnum, we’ll approach the new thing through interfaces, metaphors and other linkages we’re already used to.
Automated taxis, as envisioned by designer Petr Kubik.
So. What can we reasonably assert of a driverless car on the Google model, when such a thing is deployed on the streets and known to its riders as a taxi?
On the plus side of the ledger:
– Black men would finally be able to hail a cab in New York City;
– So would people who use wheelchairs, folks carrying bulky packages, and others habitually and summarily bypassed by drivers;
– Sexual harassment of women riding alone would instantly cease to be an issue;
– You’d never have a driver slow as if to pick you up, roll down the window to inquire as to your destination, and only then decide it wasn’t somewhere they felt like taking you. (Yes, this is against the law, but any New Yorker will tell you it happens every damn day of the week);
– Similarly, if you happen to need a cab at 4:30, you’ll be able to catch one — getting stuck in the trenches of shift change would be a thing of the past;
– The eerily smooth ride of continuous algorithmic control will replace the lurching stop-and-go style endemic to the last few generations of NYC drivers, with everything that implies for both fuel efficiency and your ability to keep your lunch down.
These are all very good things, and they’d all be true no matter how banjaxed the service-design implementation turns out to be. (As, let’s face it, it would be: remember that we’re talking about Google here.) But as I’m fond of pointing out, none of these very good things can be had without cost. What does the flipside of the equation look like?
– Most obviously, a full-fleet replacement would immediately zero out some 50,000 jobs — mostly jobs held by immigrants, in an economy with few other decent prospects for their employment. Let’s be clear that these jobs, while not great ones (shitty hours, no benefits, physical discomfort, occasionally abusive customers), generate net revenue averaging somewhere around $23/hour, and this at a time when the New York State minimum wage stands at $8/hour. These are jobs that tie families and entire communities together;
– The wholesale replacement of these drivers would eliminate one of the very few remaining contexts in which wealthy New Yorkers encounter recent immigrants and their culture at all;
– Though this is admittedly less of an issue in Manhattan, it would eliminate at least some opportunity for drivers to develop and demonstrate mastery and urban savoir faire;
– It would give Google, an advertising broker, unparalleled insight into the comings and goings of a relatively wealthy cohort of riders, and in general a dataset of enormous and irreplicable value;
– Finally, by displacing alternatives, and over the long term undermining the ecosystem of technical capabilities, human competences and other provisions that undergirds contemporary taxi service, the autonomous taxi would in time tend to bring into being and stabilize the conditions for its own perpetuation, to the exclusion of other ways of doing things that might ultimately be more productive. Of course, you could say precisely the same thing about contemporary taxis — that’s kind of the point I’m trying to make. But we should see these dynamics with clear eyes before jumping in, no?
I’m sure, quite sure, that there are weighting factors I’ve overlooked, perhaps even obvious and significant ones. This isn’t the whole story, or anything like it. There is one broadly observable trend I can’t help noticing, however, in all the above: the benefits we stand to derive from deploying autonomous vehicles on our streets in this way are all felt in the near or even immediate term, while the costs all tend to be circumstances that only tell in the fullness of time. And we haven’t as a species historically tended to do very well with this pattern, the prime example being our experience of the automobile itself. It’s something to keep in mind.
There’s also something to be gleaned from Google’s decision to throw in their lot with Uber — an organization explicitly oriented toward the demands of the wealthy and boundlessly, even gleefully, corrosive of the public trust. And that is that you shouldn’t set your hopes on any mobility service Google builds on their autonomous-vehicle technology ever being positioned as the public accommodation or public utility it certainly could be. The decision to more tightly integrate Uber into their suite of wayfinding and journey-planning services makes it clear that for Google, the prerogative to maximize return on investment for a very few will always outweigh the interests of the communities in which they operate. And that, too, is something to keep in mind, anytime you hear someone touting all of the ways in which the clean, effortless autotaxi stands to resculpt the city.
If you’ve been reading this blog for any particular length of time, or have tripped across my writing on the Urbanscale site or elsewhere, you’ve probably noticed that I generally insist on discussing the ostensible benefits of urban technology at an unusually granular level. (In fact, I did this just yesterday, in my responses to questions put to me by Korea’s architectural magazine SPACE.) I’ll want to talk about specific locales, devices, instances and deployments, that is, rather than immediately hopping on board with the wide-eyed enthusiasm for generic technical “innovation” in cities that seems near-universal at our moment in history.
My point in doing so is that we can’t really fairly assess a value proposition, or understand the precise nature of the trade-offs bound up in a given deployment of technology, until we see what people make of it in the wild, in a specific locale. The canonical example of the perils that attend the overly generic consideration of a technology is bus rapid transit, or BRT, which works very, very well indeed on sociophysical terrain that strongly resembles its original home of Curitiba, and much less so in low-density environments like Johannesburg, or in places where, for whatever reason, access to the right-of-way can’t be controlled, notably Delhi and New York City. BRT was sold to these latter municipalities as a panacea for problems of urban mobility, without reference to all of the spatial, social, regulatory, pricing-model and service-design elements that had to be brought into balance before anything like success could be declared, and it shows. (Boy howdy, does it show. Have you ridden the New York City MTA’s half-assed instantiation of BRT lately?)
And if anything, information technology is even more sensitively dependent on factors like these. The choice of one touchscreen technology (form factor, operating system, service provider, register of language…) over another very often turns out to determine the success or failure of a given proposition.
But despite all this, sometimes it is possible for the careful observer to suss out the likely future contours of a technology’s adoption, based on a more general appreciation of its nature. And that’s why I want to take a little time today to discuss with you my thinking around the emergent class of low-power, short-range transmitters known as “beacons.”
Classically, of course, a “beacon” was a visually prominent signal of some sort, designed to notify or warn those encountering it of some otherwise indistinct condition or feature in the landscape. And perhaps as originally envisioned, this class of transmitters genuinely was supposed to be what it said on the tin: a simple way for relatively low-powered devices to find and lock onto one another, amid the fog and unpredictable dynamism of the everyday.
This is not a particularly new idea; as long ago as 2005, I’d proposed on my old v-2 site that networked objects would need some lightweight, low-cost way of radiating information about their presence and capabilities to other things (and by extension, people) in the near neighborhood — the foundation of what, at that time, I thought of as a “universal service-discovery layer” draped over the world. And of course I was nowhere near the first to have proposed something along these lines; I myself had been inspired to think more deeply about things talking to each other from a sideways reading of a throw-away bit of cleverness in Bruce Sterling’s 1998 novel Distraction, and it’s fair to say that the idea of things automatically broadcasting their identity to other things had been in the air for quite a few years before that.
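For readers who want a concrete picture of what that “universal service-discovery layer” implies mechanically, here is a toy sketch: each networked object periodically broadcasts a tiny self-describing payload on the local network, and anything listening can enumerate its neighbors and their capabilities. The encoding, field names and port number below are purely illustrative assumptions on my part, not any actual beacon protocol or standard.

```python
import json
import socket

def make_presence_packet(identity, capabilities):
    """Encode an object's identity and capabilities as a small,
    broadcastable payload -- the lightweight 'radiating of presence'
    described above. Field names are hypothetical."""
    payload = {"id": identity, "caps": capabilities}
    return json.dumps(payload).encode("utf-8")

def parse_presence_packet(packet):
    """Decode a received payload back into (identity, capabilities)."""
    payload = json.loads(packet.decode("utf-8"))
    return payload["id"], payload["caps"]

# In an actual deployment, each object would periodically push its
# packet to a local broadcast address, along these lines (port 54545
# is an arbitrary choice for illustration):
#
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
#   sock.sendto(make_presence_packet("streetlamp-7", ["light", "motion"]),
#               ("255.255.255.255", 54545))
```

Note that in this (benign) framing the information flows outward, from the object to whoever cares to listen, which is precisely the opposite of what the commercial “beacon” described below actually does.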
But in evolving commercial parlance, beacons are nothing of the sort, really. A contemporary beacon (like these ugly and rather hostile-looking blebs, sold by Estimote) is primarily designed to capture information, not to convey it — and such information as it does convey outward is disproportionately intended to benefit the sender over the recipient. So my first objection to beacon technology is that this very framing is in itself mendacious, dishonest and misleading. (You know you’re in trouble when the very name of something is a lie.)
As things stand now, beacons are intended for one purpose, and one purpose alone: to capture and monetize your behavior. As with the so-called Internet of Things more broadly, there simply aren’t any particularly convincing or compelling use cases for the technology that aren’t about driving needless consumption; almost without exception, those that are even partially robust have to do with closing a commercial transaction. Both the language of beacon technology and the framework of assumptions it grows out of are airlessly, claustrophobically hegemonic, and this thinking is all over their sites: vendors urge you to deploy these “media-rich banner ads for the physical world” in “any physical place, such as your retail store,” to “drive engagement,” “cross-sell and up-sell” and eventually “convert” passersby to purchasers. Even beacon advocates have a hard time coming up with any more than half-hearted art projects by way of uses for the technology that are not founded in the desire to relieve some passing mark of the contents of their wallet, reliably, predictably and on an ongoing basis.
And even those scenarios of use which appear at first blush to be founded in blamelessly humanitarian ends, when subjected to trial by ordeal ultimately turn out to embrace the shabbiest neoliberal reasoning. Cheaper to spackle a subway station with networked microlocation transponders, goes the thinking, than to actually hire and train the (unpredictable, and damnably needy) human beings that might help riders navigate the corridors and interchange nodes. Even if the devices don’t actually turn out to work all that reliably in the fullness of time, or impose a starkly higher TCO than initially estimated, there will be a concrete deployment that someone can point to as an accomplishment, a ticked-off achievement and a justification for renewed budgetary allocation or re-election.
Finally, I find it noteworthy that the beacon cost-benefit proposition can only subsist when it is pursued stealthily; when it is presented to citizens forthrightly and transparently, it is just as forthrightly rejected. Perhaps it’s a temporary blip of post-Snowden reticence, but my sense is that most of us have become chary of bundling too many performative dimensions of our identity onto our converged devices at once, and not at all without reason. (Ultimately, I diagnose similar reasons underneath the failure to date of digital wallets and similar device-based payment solutions to gain any market traction whatsoever, though there are other questions at play there as well.)
Beyond and back
The interest in beacons strikes me as being symptomatic of something deeper and more troubling in the culture of technology, something I think of as “the Engelbart overshoot.”
There was a powerful dream that sustained (and not incidentally, justified) half a century’s inquiry into the possibilities of information technology, from Vannevar Bush to Doug Engelbart straight through to Mark Weiser. This was the dream of augmenting the individual human being with instantaneous access to all knowledge, from wherever in the world he or she happened to be standing at any given moment. As toweringly, preposterously ambitious as that goal seems when stated so baldly, it’s hard to conclude anything but that we actually did achieve that dream some time ago, at least as a robust technical proof of concept.
We achieved that dream, and immediately set about betraying it. We betrayed it by shrouding the knowledge it was founded on in bullshit IP law, and by insisting that every interaction with it be pushed through some set of mostly invidious business logic. We betrayed it by building our otherwise astoundingly liberatory propositions around walled gardens and proprietary standards, by putting the prerogatives of rent-seeking ahead of any move to fertilize and renew the commons, and by tolerating the infestation of our informational ecology with vile, value-destroying parasites. These days technical innovators seem more likely to be lauded for devising new ways to harness and exploit people’s life energy for private gain than for the inverse.
In fact, you and I now draw breath in a post-utopian world — a world where the tide of technical idealism has long receded from its high-water mark, where it’s a matter of course to suggest that we must attach (someone’s) networked sensors to our bodies in order to know them, and where, rather astonishingly, it is possible for an intelligent person to argue that spamming the globe with such devices is somehow a precondition of “reclaim[ing our] environment as a place of sociability and creativity.” And this is the world in which beacons and the cause of advocacy for them arise.
There’s very little meaningful for this technology to do — no specifiable aim or goal that genuinely seems to require its deployment, which could not be achieved as or more readily in some other way. As presently constituted, anyway, it doesn’t serve the great dream of aiding us in our lifelong effort to make sense of the endlessly confounding and occasionally dangerous world. It furthers only the puniest and most shaming of ambitions. To the talented, technically capable folks working so hard to build out the beacon world, I ask: Is this really what you want to spend any part of your only life on Earth working to develop? To those advocating this turn, I ask: Can’t you think of any way of relating to people more interesting and productive than trying to sell them something they neither want nor need, and most likely cannot genuinely afford?
It doesn’t take too concerted an intellectual effort to understand what’s really going on with beacons — as a matter of fact, as we’ve seen, most people evidently understand the situation perfectly well already. But I don’t hold out too much hope of getting any of the truly convinced to see the light on this question; we all know how very difficult it can be to get people to understand something when their salary (mortgage payments/kids’ private-school tuition/equity stake/deal flow) depends on them not understanding it. If you ask me, though, we were meant for better things than this.
Hey there! It’s been a while since I’ve shouted at ya properly, and I’m going to be MIA for just a little longer yet (having stupidly locked myself into back-to-back-to-back-to-back trips to Dublin, Manchester, Aarhus & NYC, and finding myself rather burnt to the ground as a result). In the meantime, I thought I’d give you a brief idea of what I’ve been thinking about lately, and what kinds of questions I’ll be taking up over the next few months.
I’ll warn you from the outset that everything that follows is both speculative, in that it reflects hints, notions and potential trajectories more than fully coherent and robustly worked-out arguments, and overdense, in that it alludes to more lines of thought than I can properly treat at any length you’d tolerate in a blog post. Bear with me anyway and hopefully we’ll get somewhere interesting together.
This year’s model
More than a few of you have asked just what it is that I’m up to here at LSE. My research project is fairly open, but I think it’s fair to describe it as a consideration of the perennial urbanist themes of land use, mobility and governance, as they fold back against an environment and population whose capacities and affordances are increasingly conditioned by the presence of networked computational systems.
Roughly, I’m asking: given the presence of these systems, how might we use them to (a) help allocate common spatial resources in such a way as to ensure the most socially productive use of the available space; (b) underwrite the greatest ability of all to participate personally and physically in all the circuits of exchange that constitute the city; and (c) assist communities in making wiser, more responsive and more widely agreed-upon decisions regarding these and other matters before them? And how do we do all of these things in a way that respects, supports and makes the most use of our existing competences for the city — that skillful negotiation of the world and its prospects that big-city folks have been known for since time out of mind?
Big questions, obviously, and what’s (I hope) equally obvious is that I make no pretense whatsoever of essaying neutral answers to them. With regard to the first of these topics, for example, it ought to be evident that my notions of “most productive use” bear very little resemblance to the argument from revenue-generation potential that furnishes most contemporary redevelopment schemes with their primary justificatory apparatus, and which as of this writing appears to have hollowed out any hope that the so-called “sharing economy” might give rise to radically different ways of working and living together.
As I’ll explain in greater detail below, it’s what happened to the early promise of a networked sharing economy that haunts me as I prepare to propose new configurations for convivial systems. For all the utopian hope that may have attended their arrival, I think by now it’s clear that all too many existing coworking and “maker” spaces orbit venture-financed technology startup culture too closely, badly underfulfilling their potential and reproducing conditions I have no interest in perpetuating. That I can see, they have broadly failed as alternative spaces in which we could shelter from the invidious operations of consumer-phase capital, rediscover some sense of ourselves as skilled and competent agents and reclaim responsibility for the furniture of our world. Meanwhile, other potentially transformative models, like those on which Zipcar and AirBnB are founded, seem to have been placidly, even hungrily absorbed into the extant framework of neoliberal assumption.
Signs, pointers and portents
Readers of “Against the smart city” (in Kindle or POD pamphlet editions) know that I don’t place any particularly great faith in existing institutions’ capacity (or willingness) to address these circumstances. I go into a fair amount of detail, in fact, to spell out just why I think the “smart city” is such a disastrously misguided conception of the role of networked information technology in our urban places and our lives. At the same time, though, I do think it’s incumbent upon anyone levying such a critique to articulate at least some affirmative vision of what they would like to see happen in the world.
So what do I believe more satisfying, more fructifying alternatives might look and feel like? And what do I think are some ways of using networked technologies capable of encouraging conceptions of the relation between self and society that are a little less atomic — that are, in other words, less Californian-ideological and more oriented toward commonwealth?
In the following months, I’ll be sketching out at least the basic contours of a vision of urban living and working that responds to these questions. In particular, I’m interested in elaborating the outlines of a post-growth, near-steady-state industrial permaculture in city centers, autonomously and locally managed, undergirded by networked systems of deliberation, resource stewardship, mobility and exchange. This is a vision of localism in which flows of matter and energy circulate in a carefully-maintained dynamic equilibrium; communities produce most of the things (and skills, and affects) they need to survive in an unstable world; and sensitive onshoring brings compact, clean sites of precision manufacture and production back into the urban fold, undoing the supply chains of continental and oceanic scale and the ludicrous energetic, environmental and human costs they entail. We learn, once again, to work in atoms as well as bits; we do so together; and in doing so, we focus on the creation of real prosperity in the absence of economic growth.
For a variety of reasons, it’s important to me that I ground everything I’ll be proposing in empirical observations of events and situations that have some track record of functioning successfully. As it happens, some hints of what aspects of this vision might look like in practice do crop up in three very different existing projects/processes I’m aware of: Madrid’s Campo de Cebada; the Godsbanen/Institut for (x) complex, in Aarhus, Denmark; and finally a commercial enterprise called Unto This Last right here in London. Each of these sites has something to teach us, and in some ways I think of each of them as a dress rehearsal for a best-case future.
Campo de Cebada: Community control
El Campo de Cebada, a fenced-off 60,000 sq ft lot in the heart of Madrid — formerly the site of a market, seemingly doomed to persistent vacancy by the economic crisis of 2008 — was reclaimed and transformed into a community resource by the neighborhood’s residents themselves.
After securing physical access, but before anything was built on the lot, a core group of local activists (including members of the Zuloark architectural collective) convened a series of weekly open assemblies, organized on bedrock principles of transparency, openness and participation. Residents and other interested parties were asked to propose, weigh and decide upon the programs, structures and activities the site should support. And so what had been more or less an abandoned site came under autonomous community control, using horizontal, leaderless processes very similar to those that proved so successful in the Occupy movement (including Occupy Sandy, as I describe here). It was under this informal and only retroactively sanctioned process of management that the space finally began to generate meaningful value for its users and neighbors. (At this point it may be worth noting that Spain has a robust history of anarchist practice, though it would also be something of a sublime understatement to point out that Madrid was not historically the heart of this activity.)
Both public assemblies and other, more casual activities on the site notably rely upon rapidly reconfigurable/demountable pallet-based furniture designed by Zuloark, similar to that which Raumlabor Berlin has deployed in their pop-up public spaces in the past. (Such furniture also suggests a slow percolation of open-source hardware design and construction schemas like OpenStructures, a central theme of year-before-last’s tremendous Adhocracy show.) But it would be a mistake to identify the lesson of el Campo de Cebada with its physical tokens. Like the community gardens of New York’s Lower East Side, or more recently 596 Acres, what its success suggests is that ordinary, nonspecialist people are more than capable of taking on responsibility for maintenance, deconfliction and the other less glamorous aspects of administering and operating any such site, in the very core of a world city of the long-developed North — and of doing so not in response to an environmental shock like Katrina or Sandy, but as a (dare I say “entrepreneurial”) way of grasping the emergent opportunities that lie curled up fractally inside the slower processes of economic calamity.
What the people behind el Campo de Cebada have forged together is, in essence, an Occupation that is affirmative rather than merely critical, productive and forward-looking as well as polemical. What their experience teaches us is that we can reimagine and reconfigure the sacrifice zones left behind by the reigning calculus of land valuation, grasping and making maximum use of them as a collective resource, in a maximally inclusive way.
Godsbanen/Institut for (x): Gradient of engagement
In Aarhus, my host Martin Brynskov took me for a walk around the publicly-funded Godsbanen production space/event venue, and the curious Institut for (x) that partially overlaps it. These institutions occupy a scatter of buildings lying at the end of a decommissioned rail spur that thrusts up into the heart of town, and the hour we spent walking over, around and through them began to suggest a particularly potent hybridization: autonomous self-management in the style of el Campo de Cebada, fused to the provision of standing community workshops and production facilities.
To my eye, anyway, Godsbanen consists of four distinct structures or conditions: the former railyard administration building, now the offices of various public, private and non-profit groups; a long main hall that was formerly the intermodal freight-transfer center, and now shelters the printshop, photo studio, metalshop and so on; a new infill structure (complete with vertiginously climbable roof) by 3XN, that comprises the event venue and canteen, and sinters the other buildings together; and a tumble of trailers, ad-hoc shacks, shade structures and lean-tos that apparently constitute the Institut for (x).
What was wonderful about Godsbanen was seeing men and women both — of all ages, very few of whom were obviously hipsterized — using the available wood-, metal-, clay- and textile-working facilities to make things for their own daily use. It’s this deployment of emergent digital craft techniques to produce things primarily with an eye to their use value rather than their exchange value à la present-day Etsy that so excited me.
But there are other ways in which Godsbanen one-ups the usual makerspace proposition. For example, the site sports a legible gradient of formality and structure, accessible at any point and traversable in either direction; you can literally see the stiff Scandinavian rectitude of the administration building decomposing into particles as you walk further down the rails, with everything that implies for uses and users. Martin pointed out that the complex supports two entirely distinct woodworking shops, one at either end of the gradient: the first (low-cost, but still pay-for-use) furnished with state-of-the-art equipment and on-site assistance, and the other, further down the yard, free but provided with somewhat older equipment and not much in the way of help/oversight. A project could germinate with two or three friends tinkering in the anarchic fringes, and move up the grade as they began to need more budget, order and privacy, or, alternately, a formal enterprise used to the comforts and constraints of the main building might hive off an experimental or exploratory activity requiring the freedom of the fringes. Either way, individual or collective undertakings are able to mature and develop inside a common framework, and avail themselves of more or less structure as needed. This is something that many self-styled incubators attempt, and very few seem to get right.
The further away one walks from the main building, the greater the sense of permission granted by the apparently random distribution of objects around the central space, by the texture of these objects and their orientation. This is of course not at all random: everything you see has been selected with an eye toward a precisely calibrated aesthetic that at times comes perilously close to favela chic, but that does send a very powerful message about the appropriability of the environment, the kinds of things people can do here and the kinds of people who can do them. (Note that this is the same message ostensibly conveyed, but actually undermined, by the “wacky,” infantilized furniture of dot-com and tech-startup offices.)
This aspect of legibility, or performativity, strikes me as being nontrivially important to the success of the Godsbanen project. What fifty or more years of spectacular consumerism have left us with is the need to be seen to be doing what we do, as a performance of self, identity and affiliation. What participation in a place like Institut for (x) gives its user-constituents is a way to achieve that end without it necessarily being commodified. Citizens are making a very deliberate statement by participating here, and being seen to participate: a statement of value that remains outside the register of consumer capitalism, without necessarily being overtly, consciously or uncomplicatedly in opposition to it.
My sense is that Aarhus has figured out something sensitively dependent on a whole lot of boundary conditions — something that municipalities around the planet are falling all over themselves trying to reinvent, and generally missing by a country mile. Their success has something to do, certainly, with the fact that Denmark can find funds in the public purse to support this kind of activity, and just as certainly with the fact that a coherent fabric of trust yet persists in Danish culture of the everyday.
But it owes even more to some very canny spatial and social thinking. What the Aarhus experiment teaches us, among quite a few other things, is how to organize space so its legibility serves its users rather than the prerogatives of territorial control, and that many of the material things we need in life we can learn to make for ourselves.
Unto This Last: Local production, training and employment
Which brings us to Unto This Last, a commercial furniture manufacturer that has been operating in London’s Brick Lane for the past thirteen years. Their product line — a reasonably wide selection of chairs, tables, beds, bookshelves and storage units — displays a total coherence from conception all the way through design, fabrication method and setting to delivery. Each piece has been carefully designed so that it can be assembled from flat pieces cut from sheets of sustainably-grown birch plywood, by a CNC cutter right in the back of the shop. (Swing by at the right time, and you can see it in action, cutting components of the piece that you yourself will take home and weave into your life.) The shop’s ethos of “less mass, more data” rather takes the logistics-friendly Ikea flatpack concept to a new level.
There are, inevitably, issues. While I personally rather like it, it’s clear that the stripped-down aesthetic (ably conveyed by the store’s iconic sign) isn’t for everyone. And ideally trees yielding wood suitable to this kind of application could be grown within the local bioregion, rather than being shipped from the (state-owned and -managed) forests of Latvia.
Nevertheless, alongside other, slightly differing initiatives, like the wonderfully-named Assemble & Join, what Unto This Last teaches us is how to wrest the greatest practical, economic and (as we’ll see) social value from the minimum investment in matter and energy.
In the fusion of these three archetypal processes, el Campo de Cebada, Godsbanen and Unto This Last, we can see the outlines of something truly radical and terribly exciting beginning to resolve. What can be made out, gleaming in the darkness, is a — partial, incomplete, necessarily insufficient, but hugely important — way of responding to the disappearance of meaningful jobs from our cities, as well as all the baleful second-order effects that attend that disappearance.
When apologists for the technology industry trumpet the decontextualized factoid that each “tech” job ostensibly creates five new service positions as a secondary effect, what they neglect to mention is that the lion’s share of those jobs will as a matter of course prove to be the kind of insecure, short-term, benefits-lacking, at-or-close-to-minimum-wage positions that typify the contemporary service sector. This sort of employment can’t come anywhere close to the (typically unionized) industrial-sector jobs of the twentieth century in their capacity to bind a community together, whether in the income and benefits they produce by way of compensation, in the conception of self and competence they generate in those who hold them, or in the sense of solidarity with others similarly situated that they generally evoke.
At the same time, though, like many others, I too believe it would be foolish to artificially inflate employment by propping up declining smokestack industries with public-sector subsidies. Why, for example, continue to maintain Detroit’s automobile manufacturers on taxpayer-funded life support, when their approach to the world is so deeply retrograde, their product so very corrosive environmentally and socially, their behavior so irresponsible and their management so blitheringly, hamfistedly incompetent? That which is falling should also be pushed, surely. But that can’t ethically be done until something of comparable scale has been found to replace industrial manufacturing jobs as the generator of local economic vitality and the nexus of local community.
So where might meaningful, valued, value-generating employment be found — “employment” in the deepest sense of that word? I have two ways of answering that question:
- In the immediate term, I believe in the material and economic significance of digital fabrication technologies largely using free and open-source plans, deployed in small, clean, city-center workshops, under democratic community control. While these will never remotely be of a scale to replace all the vanished industrial jobs of the past, they offer us at least one favorable prospect those industrial jobs never could: the direct production of items immediately useful and valuable in one’s own life. Should such workshops be organized in such a way as to offer skills training (perhaps for laid-off service-sector workers, elders or at-risk youth), they present a genuinely potent economic and social proposition.
There are provisos. The Surly Urbanist correctly suggests that any positions created in such an endeavor need to be good jobs, i.e. not simply minimum-wage dronework, and my friend Rena Tom also notes that the skills training involved should be something more comprehensive than a simple set of instructions on how to run a CNC milling machine — that any such course of instruction would be most enduringly valuable if it amounted to an apprenticeship first in the manual and only later the numeric working of materials. I also want to be very clear that, per the kind of inclusive decision-making processes used at el Campo de Cebada, such a workshop would have to be something a community itself collectively thinks is worth experimenting with and investing in, not something inflicted upon it by guileless technoutopians from afar.
- In the fullness of time, I believe that the use of relatively high-technology techniques to accomplish not merely the local, autonomous production of everyday objects, furnitures and infrastructures, but their refit and repair, will come to be an economically salient activity in the global North. In this I see a congelation of several existing tendencies, logics or dynamics: the ideologically-driven retreat of the State from responsibility for stewardship of the everyday environment; the accelerating attrition and degradation of the West’s dated and undermaintained infrastructures, and their concomitant need for upgrade or replacement; increasing belief in the desirability of densifying urban infill; the rising awareness in the developed world of jugaad, gambiarra and other cultures of repair, reuse and improvisation; the emergence of fabricator-enabled adaptive upcycling; the circulation of a massive stock of recyclable componentry (in the form of obsolescent structures as well as landfill-bound but effectively nondegradable consumer items), coupled to the emergence of a favorable economics of materials recovery; broader experience with and understanding of networked, horizontal and leaderless organizational structures; the creation of a robust informational commons, including repositories of freely-downloadable specifications; and finally the clear capability of online platforms to facilitate development and sharing of the necessary knowledge, maintain some degree of standardization (or at least harmonization) of practice, suggest sites where citizen repair might constitute a useful intervention, and support processes of democratic decision-making.
On forgetting to slay the dragon
Especially when they’re of industrial grade, the 3D printers, laser cutters, CNC milling machines and other devices involved in digital precision manufacture are highly visible and — if you’ve ever seen one in operation, you know it’s true — coldly glamorous, possessed of the same eerie machinic grace and certainty that makes the flight of quadcopter drones such an uncanny thing to witness. Nor are fabricated things themselves without a certain evocative power. In a dynamic we should all be familiar with by now, and deeply suspicious of, the discrete printed object is often taken as not merely a sign standing for a complex underlying process, but accepted as an unremarkable replacement and stand-in for it. Thus we see an efflorescence of on-demand mall and High Street “fab labs” apparently dedicated to churning out novelty items of puissant symbolism, but little actual utility: personalized busts, complex gear trains that will never be connected to any other mechanism, and similar dead ends and blind alleys.
I certainly do not mean to fetishize the new production. What I do mean to suggest is that we’ve barely taken the measure of these networked, decentralized, distributed technologies of material production as economic and social enablers. The same techniques that generated kipple of the sort I describe above have clearly already transcended the hobbyist stage, having recently been used to rapidly produce and assemble objects of architectural scale and intent. (If anything, this impressive performance was underhyped; as Fred Scharmen points out, the designers/fabricators responsible for the Shanghai development “don’t have press agents, they didn’t make a rendering, they didn’t even post any photos or concepts until after they did it.”)
But neither are the technologies themselves really the point here. In everything I suggest above, the act of production is — comparatively, and for all its many rigors — the trivially easy bit. The challenge isn’t, at all, to propose the deployment of new fabrication technologies, but to deploy them in modes, configurations and assemblages that might effectively resist capture by existing logics of accumulation and exploitation, and bind them into processes generative of lasting and significant shared value. This is the infinitely harder project of weaving all of these technologies into not merely “sustainable” but actually sustained practices and communities of practice.
My mistake in the past — and, in retrospect, it’s an astonishingly naïve and determinist one — was to think that emergent networked forms of shared resource utilization might in themselves give rise to any particularly liberatory politics of everyday life. Experience has taught me that such notionally transformative frameworks as do arise very readily get appropriated by existing ways of valuing, doing and being; whatever “emancipatory potential” may reside in them swiftly falls before path dependency and the weight of habit, and the gesture as a whole comes to nought.
This is what appears, for the time being anyway, to have fatally undermined the more interesting prospects for conceiving of space as a shared network resource, the cluster of practices I think of as treating “space as a service.” Consider what’s become of my original argument that the companionable coexistence of AirBnB and Couchsurfing.org implied enough space for a (non-corporate but robustly) commercial business model and a fiercely noncommercial service model to subsist side-by-side, even as they brokered access to the same resource: fast-forward three years, and AirBnB looks more and more like a formal branch of the hospitality industry with each passing day, while Couchsurfing has — fumblingly, and much to the chagrin of its original animating community — reinvented itself as a for-profit competitor.
The dynamic here puts me in mind of a thought expressed succinctly by David Harvey in his new, and excellent, book Seventeen Contradictions and the End of Capitalism:
The long history of attempts to create some such alternative (by way of worker cooperatives, autogestion, worker control and more latterly solidarity economies) suggests that this strategy can meet with only limited success…If the aim of these non-capitalistic forms of labor organization is still the production of exchange values, for example, and if the capacity for private persons to appropriate the social power of money remains unchecked, then the associated workers, the solidarity economies and the centrally planned production regimes ultimately either fail or become complicit in their own self-exploitation.
Also sobering is how very often over the past few years “disruptive innovation” in services has been attended by the worst sort of triumphalist douchery on the part of the already-privileged beneficiaries of the ostensible disruption. I think of the tellingly-named Uber, explicitly positioned as an outright celebration of the “self-made” Randian superman’s differential ability to route around urban infrastructural, bureaucratic and regulatory failure, in a world where his social and economic lessers are reduced to relying on defunded, dysfunctional, all-but-dystopian public transit. Uber’s self-serving rhetoric casts any regulation of their service as unwonted friction imposed by meddlesome rent-seekers, when that fabric of regulation was for the most part woven into place for good and sufficient reason.
As if these disappointments weren’t enough to chasten me from making assertions about propensities and likelihoods, not too long ago Anil Bawa-Cavia (rightly, I think) poked back at something I’d said regarding the “latent and unrealized emancipatory potential” of certain technologies:
I don’t see any reason to believe that any technology has a pre-inscribed ‘potential’ that remains latent within it. I agree with Harman’s interpretation of Latour on this point, extreme as it may be. Either entities have active affinities and relations or they don’t. I see no convincing reason to believe they possess an essence in which potential may reside. So can networked technology be emancipatory? I’d like to believe so, but only acting in relation with other actors in a co-ordinated manner…I don’t [therefore] think it’s constructive to simply assert that this potential is latent, as it amounts to an ideological projection or political posturing. The task, then, would be to go ahead and activate these technologies by bringing them in relation to other actants in ways which might be regarded as emancipatory.
Here the terms of what might at first blush appear to be an abstruse debate in the metaphysics of the flat ontology turn out to have important implications for the ways in which we see, describe and act in the world. Though for myself I tend to believe that all things have recourse to a broader performative repertoire than that set of relations currently enacted, I take Anil’s (and Harman’s, and more distantly Latour’s) point: we have to actually do the work of forging some linkage between things before we can know whether that particular linkage was in fact possible. And that work is an investment, is never accomplished without some cost.
So for all of these reasons, I’ve become wary of using that word “potential” to express my hope for the trajectories that appear to me to be latent in some emergent technosocial circumstance, but have yet to be actualized. But history nevertheless suggests that there is a marked degree of affinity between practices of material production in distributed, networked workshops, on the one hand, and polities choosing to organize themselves as a federation of autonomous local collectives managed by popular assembly on the other. If the latter seems in any wise to be a productive way of addressing some of the more vexatious challenges that afflict us, then maybe it might not be such a bad idea to experiment with the former. (Murray Bookchin gives some consideration to the organic politics of the materially self-reliant, in contexts that include medieval northern Italy and post-Colonial New England, in The Rise of Urbanization and the Decline of Citizenship, which I recommend without reservation.)
Given the direct and ancillary benefits that seem likely to cascade off of locating material production capabilities of this sort in the community, it might not be such a bad idea to experiment with them in any event, regardless of your politics. My aim, in all cases, is to see if the binding power of the network can’t be used to perform a kind of urban kintsugi: Expose the seams and sutures between things, articulate those seams in such a way as to improve the whole, leave the newly-rejoined fabric stronger than it had been before. What lies ahead is the costful task of attempting to verify whether this can in fact be accomplished — whether the value I suppose to subsist in this particular imagined alignment of technologies, spatial arrangements and organizational structures can actually be realized, by helping to produce real-world circumstances and situations that demonstrate it. And while there are certainly enough daunting aspects to this endeavor, and more than enough, I’ve rarely in my adult life been more optimistic than I find myself at this moment. It is clear to me that what we now have at hand, and ready to hand, are practices of the minimum viable utopia.
UPDATE: Event confirmed for 14th March, 2014. See the final post.
For the past half-decade or so, in a phenomenon most everyone reading this site is no doubt already intimately acquainted with, data-derived artifacts (dynamic visualizations, digital maps, interactive representations of place-specific information, even static “infographics”) have taken increasing prominence in the visual imaginary of mass culture.
We see such images all the time now: broadly speaking, the visual rhetoric associated with them is the animating stuff of everything from car commercials to the weather forecast. The same rhetoric breathes life into election and sports coverage on television, the title sequences of movies, viral Facebook posts and the interactive features on newspaper sites.
Sometimes — in fact, often — these images are deployed as abstract tokens, empty fetishes of futurity, tech-ness, data-ness, evidence-basedness…ultimately, au-courantness. Just as often, and very problematically, they’re used to “prove” things.
But we’ve also begun to see the first inklings of ways in which such artifacts can be used more interestingly, to open up rather than shut down collective discussion around issues of great popular import — to ask its users to consider how and why the state of affairs represented by a given visualization got to be that way, whether that state of affairs is at all OK with them, and what if anything ought to be done to redress it. And this is whether the topic at hand happens to be land use, urban renewal and gentrification, informal housing, the differential consequences of public and privatized mass transit or expenditures in the criminal justice system.
Very few methods of advocacy can convey the consequences of our collective decisions as viscerally as a soundly-designed visualization. (Similarly, if there’s a better way of helping people imagine the spatial implications of alternative policy directions, strategies, investments and allocations, I haven’t stumbled onto it yet, although that certainly blurs the distinction between representing that which does exist and simulating that which does not.) What would happen if such visualizations were consciously and explicitly used as the ground text and point of departure for a moderated deliberative process? Could democracy be done this way? Could this be done at regular intervals? And how might doing so lead to better outcomes (or simply more buy-in) than existing procedures?
There’s plenty of rough precedent for such a notion, albeit scattered across a few different registers of activity:
– A few savvy journalists are starting to use data-based visualizations and maps as the starting point for their more traditional investigative efforts, and the narratives built on them. Visualizations, in this mode, essentially allow unexpected correlations and fact patterns to rise to the surface of awareness, and suggest what questions it might therefore be fruitful for a reporter to ask.
– SeeClickFix, of course, already allows citizens to levy demands on local government bodies, though it doesn’t provide for the organization of autonomous response to the conditions it documents, and it forthrightly positions the objects it represents as problems rather than matters of concern. More proactive and affirmative in its framing is Change By Us, which does emphasize voluntarism, though still with a sense of supplication to (elected or appointed) representatives in government. (The site answers the question “Who’s listening?” by promising that a “network of city leaders is ready to hear your ideas and provide guidance for your projects.”) In any event, both SeeClickFix and Change By Us focus on highly granular, literally pothole- or at most community-garden-scale issues.
– Storefront Democracy, a student project of Kristin Gräfe and (ex-Urbanscaler) Jeff Kirsch, reimagined the front window of a city councillor’s district office as a site where community sentiment on various questions, expressed as votes, could be visualized. Voting is not quite the same thing as democracy, much less deliberation, but the project began to explore ways in which situated representations might be used to catalyze conversations about matters facing the community.
– There are even full-blown technological platforms that promise to enable robust networked democracy, though for all the technology involved this one at least seems to blow right by the potential of visualized states of affairs to serve as focal points for managed dissensus.
Draw out all of those threads, and what do you wind up with? I’m not at all sure, but the question is certainly provocative enough that I want to explore its implications in further depth and detail. Again, I’m interested in digital cartography and interactive representations of data used as the starting point, rather than the product and culmination, of a decision process. My intention is to disturb these things as settled facts, disinter them from the loam of zeitgeisty but near-meaningless infoporn that furnishes more than one glossy coffee-table book, and activate them instead as situated social objects. I think by now it’s clear that data-driven projects like Digital Matatus can furnish people with practical tools to manage the way things are in the city. But can they usefully catalyze conversation about the way things could (or should) be? And can we somehow bundle information about provenance into every representation of data, allowing users to ask how it was gathered, by whom, using what means and for what notional purpose, so they can arrive at their own determinations of its reliability and relevance? All of that remains to be seen.
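One minimal way such a provenance bundle might look in practice — as a sketch only, with all the names, fields and example values here being hypothetical illustrations rather than any existing standard — is a data structure that refuses to let the records travel without the record of their own gathering:

```python
from dataclasses import dataclass
from datetime import date
from typing import Any

@dataclass(frozen=True)
class Provenance:
    """Who gathered the data, how, when, and to what declared end —
    the minimum a viewer needs to judge reliability and relevance."""
    gathered_by: str       # person or institution that collected the data
    method: str            # e.g. "household survey", "smartphone GPS traces"
    gathered_on: date      # date of collection
    stated_purpose: str    # the purpose the collector declared at the time

@dataclass
class TracedDataset:
    """A dataset that cannot be detached from its provenance record."""
    records: list[Any]
    provenance: Provenance

    def summary(self) -> str:
        p = self.provenance
        return (f"{len(self.records)} records, gathered by {p.gathered_by} "
                f"via {p.method} on {p.gathered_on.isoformat()}, "
                f"for: {p.stated_purpose}")

# Hypothetical example loosely modeled on a Digital Matatus-style effort:
counts = TracedDataset(
    records=[132, 87, 210],
    provenance=Provenance(
        gathered_by="volunteer mapping collective",
        method="smartphone GPS trace collection",
        gathered_on=date(2014, 1, 15),
        stated_purpose="route and schedule mapping",
    ),
)
print(counts.summary())
```

The design choice doing the work here is simply that the provenance object is frozen and travels with the records, so any downstream visualization built on a `TracedDataset` can surface the answers to “gathered by whom, how, and why” without a separate lookup.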
If you find yourself nodding at any of this — or, indeed, you think it’s all deeply misguided, but nevertheless worth contesting in person — consider this a heads-up that I’ll be convening a one-day seminar on this and related topics at LSE in mid-March, and am looking for qualified speakers beyond my personal orbit and existing friendship circles. If you’re interested in either attending or speaking, please do email me at your earliest convenience at my first initial dot my last name at lse.ac.uk. Limited travel support is available – I have an event budget that allows me to fly in two to three speakers and put you up in Central London for a night, so if you or someone you know is inclined to present I definitely encourage you to get in touch. And let’s see if together we can’t figure out if there’s a thing here or not.
Now that we’re finally slouching toward Amazon to be born — i.e. I’m confident that the Kindle edition, at least, will ship within the next ten days — I’m happy to be able to share this final bibliography for “Against the smart city.” I hope, as ever, you find it useful.
Alcatel-Lucent Corporation. “Getting Smart About Smart Cities: Understanding the Market Opportunity in the Cities of Tomorrow,” February 2012.
Alexander, Steve. “IBM wants Minneapolis to become a ‘smarter city,’” Minneapolis Star Tribune, 6 June 2011.
Allease, Eve. “Abu Dhabi, United Arab Emirates: Future Green City Now,” Urban Times, 22 May 2011.
Allianz Open Knowledge Initiative. “Masdar City: a desert utopia,” 30 March 2009.
Alusi, Annissa, Robert G. Eccles, Amy C. Edmondson and Tiona Zuzul. “Sustainable Cities: Oxymoron or the Shape of the Future?,” Harvard Business School Working Paper 11-062, 20 March 2011.
Amnesty International. “Amnesty International Report 2008: Americas Regional Update. Selected events covering the period from January to April 2008,” 28 May 2008.
— “‘We have come to take your souls’: the caveirão and policing in Rio de Janeiro,” 13 March 2006.
Android Open Source Project. “Licenses.”
Beer, Stafford. Platform for Change: A Message from Stafford Beer. Wiley, New York, 1975.
Bell, Genevieve and Paul Dourish. “Yesterday’s Tomorrows: Notes on Ubiquitous Computing’s Dominant Vision,” Personal and Ubiquitous Computing Volume 11 Issue 2, January 2007.
Bettencourt, Luís M.A., et al. “Growth, innovation, scaling, and the pace of life in cities,” Proceedings of the National Academy of Sciences, Volume 104 Number 17, 24 April 2007.
Biddle, Sam. “Racial Profiling: Newest Trend in Silicon Valley?,” Valleywag, 7 August 2013.
Black, Edwin. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation, Random House, New York, 2001.
Boudreau, John. “Cisco helps build prototype for instant cities,” San Jose Mercury News, 1 June 2010.
Brand, Stewart. How Buildings Learn, Viking Press, New York, 1994.
Brewster, Kent. “Profiling Atherton,” July 2013.
Buro Happold. “Projects: PlanIT Valley.”
Carlisle, Tamsin. “Masdar City clips another $2.5bn from price tag,” The National, 1 December 2010.
Chalmers, Matthew and Ian MacColl. “Seamful and Seamless Design in Ubiquitous Computing,” Technical Report Equator-03-005, 2004.
Chen, David W. “Survey Raises Questions on Data-Driven Policy,” The New York Times, 8 February 2010.
Chomsky, Noam. “The Case Against B.F. Skinner,” New York Review of Books, 30 December 1971.
Cisco Systems. “Cisco and Lake Nona Unite to Create First U.S. Iconic Smart+Connected Community in Orlando, Florida,” 24 October 2012.
— “Cisco Contributes to Open Source.”
— “Cities of the Future: Songdo, South Korea,” 2012.
— “Cities of the Future: Songdo, South Korea – Living,” 2012.
— “Cities of the Future: Songdo, South Korea – Roadmap for a New Community,” 2012.
— “Smart City Framework: A Systematic Process for Enabling Smart+Connected Communities,” September 2012.
— “Smart+Connected Communities.”
City Mayors Foundation. “Largest cities in the world ranked by population density,” 2007.
City Protocol Society. “City Protocol.”
Cohen, Boyd. “Singapore Is On Its Way To Becoming An Iconic Smart City,” Fast Company co.Exist, 14 May 2012.
Le Corbusier. The Athens Charter, Grossman Publishers, New York, 1973.
— La Ville Radieuse. Editions Vincent, Freal & Co., Paris, 1935.
Cotton, Brian (“Ph.D.”!) for Frost & Sullivan. “Intelligent Urban Transportation: Predicting, Managing, and Integrating Traffic Operations in Smarter Cities.”
CSIR-Central Road Research Institute. “Evaluating Bus Rapid Transit (BRT) Corridor Performance from Ambedkar Nagar to Mool Chand Intersection,” 13 February 2013.
Davis, Mike and Daniel Bertrand Monk. Evil Paradises: Dreamworlds of Neoliberalism, The New Press, New York, 2008.
De la Peña, Benjamin. “Embracing the Autocatalytic City,” The Atlantic Cities, 11 March 2013.
Deleuze, Gilles. Cinema 1: The Movement-Image, Athlone Press, London, 1986.
Dourish, Paul. Where the Action Is: The Foundations of Embodied Interaction, The MIT Press, Cambridge MA, 2004.
The Economist. “Masdar plan,” Technology Quarterly, 4 December 2008.
Economist Intelligence Unit for Siemens AG. “Managing the city as a ‘living organism,’” Asian Green City Index, 2011.
Emirates Center for Human Rights. “Migrant workers in the United Arab Emirates,” July 2012.
English, Bella. “He’ll Build This City,” Boston Globe, 13 December 2004.
Executive Affairs Authority, Emirate of Abu Dhabi. “Law No. 22: Establishment of Abu Dhabi Future Energy Company and Masdar Institute of Science and Technology,” 2007.
EXP, Research Centre for Experimental Practice at the University of Westminster. “Archigram Archival Project.”
Feuer, Alan. “Occupy Sandy: A Movement Moves to Relief,” The New York Times, 9 November 2012.
Flood, Joe. The Fires: How a Computer Formula Burned Down New York City — And Determined the Future of American Cities. Riverhead Books, New York, 2010.
Forrester, Jay. Urban Dynamics, The MIT Press, Cambridge MA, 1969.
Frayssinet, Fabiana. “Forced Eviction from Rio’s Slums Echoes Dark Past,” Tierramerica, 10 May 2010.
Galbraith, Jay R. “Matrix organization designs: How to combine functional and project forms,” Business Horizons Volume 14 Issue 1, 1971.
Gartner, Inc. “Is ‘Smart Cities’ The Next Big Market?,” 2010.
Gehl, Jan. Life Between Buildings: Using Public Space, trans. Jo Koch, Van Nostrand Reinhold, New York, 1987.
Gibson, William. Neuromancer, Ace Books, New York, 1984.
Green, Jeremy for OVUM. “Digital Urban Renewal,” April 2011.
Greenfield, Adam. “Preliminary Notes to a Diagram of Occupy Sandy,” Speedbird, 21 November 2012.
— Everyware: The dawning age of ubiquitous computing, New Riders, Berkeley, 2006.
Gunther, Marc. “A Photo Tour of Masdar City,” Greenbiz.com, 21 January 2011.
Hatch, David. “Singapore Strives to Become ‘The Smartest City,’” Governing, February 2013.
Hedlund, Jan for Microsoft Corporation. “Smart City 2020: Technology and Society in the Modern City,” March 2011.
Hitachi, Ltd. “Coordination of Urban and Service Infrastructures for Smart Cities,” 2012.
— “Telecommunication Systems for Realizing a Smart City,” 2012.
Howard, Ebenezer. Garden Cities of To-morrow, Faber and Faber, London, 1902.
Human Rights Watch. “World Report 2012: United Arab Emirates,” January 2012.
IBM Corporation. “City Government and IBM Close Partnership to Make Rio de Janeiro a Smarter City,” 27 December 2010.
— “IBM Intelligent Operations Center for Smarter Cities.”
— “Intelligent Operations Center,” 6 March 2012.
— Advertisement: “Mayors Of The World, May We Kindly Have 540 Words With You?”
— “The Smarter City: Traffic.”
— “Smarter Cities: Infrastructure. Operations. People.”
— “Smarter Public Safety: Smarter Cities solutions for law enforcement.”
— “Traffic Prediction Tool.”
— “Welcome to the Smarter City.”
Incheon Free Economic Zone Authority. “Business Outline: Development Plan.”
— “Incheon Free Economic Zone: One-Stop Service.”
— “Investment Incentive, Incheon Free Economic Zone.”
International Data Corporation. “Worldwide Quarterly Enterprise Networks Tracker: Top Five Worldwide Layer 2/3 Ethernet Switch Vendors,” 23 August 2012.
International Telecommunication Union. “Living In a World of 7 Billion People: Digital Cities for a Better Future,” ITU News, August 2011.
— “The World in 2013: ICT Facts and Figures,” February 2013.
Jacobs, Jane. The Death and Life of Great American Cities, Random House, New York, 1961.
Kim, Bongsu. “Subway CCTV was used to watch citizens’ bare skin sneakily,” Asian Business Daily, 16 July 2013. (In Korean.)
Kitchin, Rob and Martin Dodge. Code/Space, The MIT Press, Cambridge MA, 2011.
Koetsier, John. “Cisco helps build first U.S. ‘Smart+Connected’ city of the future in Lake Nona, Florida,” VentureBeat, 23 October 2012.
Kolesar, Peter. “Model for Predicting Average Fire Company Travel Times,” RAND Institute report R-1624-NYC, June 1975.
Koolhaas, Rem. “The Generic City” in S, M, L, XL, The Monacelli Press, New York, 1994.
Lee, Junho and Jeehyun Oh. “New Songdo City and the Value of Flexibility: A Case Study of Implementation and Analysis of a Mega-Scale Project,” MS thesis Massachusetts Institute of Technology, 2008.
LG Electronics. “LG HomNet: Total Solution.”
Lindsay, Greg. “Building a Smarter Favela: IBM Signs Up Rio,” Fast Company, 27 December 2010.
Living PlanIT. Video: “Building efficient urban-scale environments.”
— “Cities in the Cloud: A Living PlanIT Introduction to Future City Technologies,” July 2011.
— “Design Wins.”
— “Living PlanIT at Cisco C-Scape,” July 2011.
— “Living PlanIT’s CEO Steve Lewis selected by the World Economic Forum as a Technology Pioneer 2012.”
— “Living PlanIT Urban Operating System: Introduction to the Living PlanIT UOS Architecture, Open Standards and Protocols.”
— “Planit [sic] Valley, a true innovation in urban development.”
— “Urban Operating System: Overview.”
— “What is Living PlanIT?”
— “Why Become a Living PlanIT Partner Company?”
Masdar (Abu Dhabi Future Energy Company). “Benefits of Setting Up in a Free Zone.”
Masdar City. “Frequently Asked Questions,” 2011.
— “Masdar City: The Global Center of Future Energy,” 2011.
McCullough, Malcolm. Digital Ground: Architecture, Pervasive Computing, and Environmental Knowing, The MIT Press, Cambridge MA, 2004.
Medina, Eden. “Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende’s Chile,” Journal of Latin American Studies Volume 38 Issue 3, 2006.
Mehta, Suketu. “In the Violent Favelas of Brazil,” New York Review of Books, 11 July 2013.
Microsoft Corporation. “Microsoft and Living PlanIT Partner to Deliver Smart City Technology Via the Cloud,” 22 March 2011.
— “The Smart City: Using IT to Make Cities More Livable,” December 2011.
Migurski, Michal. “Visualizing Urban Data,” in Beautiful Data: The Stories Behind Elegant Data Solutions, Toby Segaran and Jeff Hammerbacher, eds., O’Reilly Media, Sebastopol CA, 2012, pp. 167-182.
— “Oakland Crime Maps X,” tecznotes, 3 March 2008.
Mitleton-Kelly, Eve. “Ten Principles of Complexity & Enabling Infrastructures,” Complex systems and evolutionary perspectives on organisations: the application of complexity theory to organisations, Elsevier Science Ltd, Oxford, 2003.
Mlot, Stephanie. “Microsoft CityNext Aims To Build ‘Smart Cities’,” PC Magazine, 11 July 2013.
Montavon, Marylène, Koen Steemers, Vicky Cheng and Raphaël Compagnon. “‘La Ville Radieuse’ by Le Corbusier once again a case study,” The 23rd Conference on Passive and Low Energy Architecture, 6 September 2006.
Mostashari, Ali, Friedrich Arnold, Mo Mansouri and Matthias Finger. “Cognitive cities and intelligent urban governance,” Network Industries Quarterly Volume 13 Number 3, 2011.
Mumford, Eric. The CIAM Discourse on Urbanism, 1928-1960, The MIT Press, Cambridge MA, 2002.
Mumford, Lewis. The City in History: Its Origins, Its Transformations, and Its Prospects, Harcourt, Brace and World, New York, 1961.
Newman, Oscar. Creating Defensible Space, US Department of Housing and Urban Development Office of Policy Development and Research, Washington DC, 1996.
OFFICE: Jason Schulte Design, Inc. “IBM: Designing a Smarter Planet.”
Patten, Bob. “Standard operating procedures in Intelligent Operations Center Version 1.5,” IBM developerWorks, 10 May 2013.
Paul-Ebhohimhen, Virginia A. and Alison Avenell. “Systematic review of the use of financial incentives in treatments for obesity and overweight,” Obesity Reviews Volume 9 Issue 4, July 2008.
Perrow, Charles. Normal Accidents: Living with High-Risk Technologies, Basic Books, New York, 1984.
Poole, Erika Shehan, Christopher A. Le Dantec, James R. Eagan and W. Keith Edwards. “Reflecting on the invisible: understanding end-user perceptions of ubiquitous computing,” Proceedings of Ubicomp ’08, Volume 344, ACM, New York, 2008.
Quigley, John M. “Urban diversity and economic growth,” Journal of Economic Perspectives Volume 12 Number 2, 1998.
Reporters Without Borders. “Authorities crack down on social networks and activist bloggers,” 30 March 2012.
Rider, Kenneth L. “A Parametric Model for the Allocation of Fire Companies,” New York City-RAND Institute report R-1615-NYC/HUD, April 1975.
Riis, Jacob. How the Other Half Lives: Studies among the Tenements of New York, Charles Scribner’s Sons, New York, 1890.
Ross, Andrew. “Human Rights, Academic Freedom, and Offshore Academics,” Academe, January 2011.
Roudman, Sam. “Bank of America’s Toxic Tower,” The New Republic, 28 July 2013.
Rudofsky, Bernard. Streets for People, Doubleday, New York, 1969.
Sadler, Simon. Archigram: Architecture without Architecture, The MIT Press, Cambridge MA, 2005.
Sen, Amartya Kumar. “The impossibility of a Paretian liberal,” Journal of Political Economy Volume 78 Number 1, January 1970.
Schmidt, Harald, Kristin Voigt and Daniel Wikler. “Carrots, sticks, and health care reform: problems with wellness incentives,” New England Journal of Medicine, 4 January 2010.
Schneider, Friedrich, Andreas Buehn and Claudio E. Montenegro. “Shadow Economies All over the World,” World Bank Policy Research Working Paper 5356, July 2010.
Scott, James C. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, Yale University Press, New Haven, 1999.
Segel, Arthur I. “New Songdo City,” Harvard Business School case study 9-206-019, 15 June 2006.
Siemens Corporation. “Collective Intelligence: City Cockpit, Real-Time Government,” 2011.
— “Smart City in detail: Intelligent communication solutions for smart cities.”
— “Sustainable Buildings — Networked Technologies: Smart Homes and Cities,” 2008.
— “What is the Siemens City of the Future?,” 2012.
Simon, David, Kia Corthron, Ed Burns and Chris Collins. The Wire, Season 4, Episode 9: “Know Your Place,” first aired 12 November 2006.
Smith, P.D. City: A Guidebook for the Urban Age, Bloomsbury Publishing USA, New York, 2012.
Songdo IBD. “The City: Master Plan.”
Sönmez, Sevil, Yorghos Apostopoulos, Diane Tran and Shantyana Rentrope. “Human rights and health disparities for migrant workers in the UAE,” Health and Human Rights Volume 13 Number 2, 2011.
Spufford, Francis. Red Plenty, Faber and Faber, London, 2011.
Stallman, Richard. “Why ‘Open Source’ misses the point of Free Software,” Communications of the ACM, Volume 52 Issue 6, June 2009.
Sussman, Joseph M. “Collected views on complexity in systems,” MIT Engineering Systems Division Working Paper Series, 30 April 2002.
Tuan, Yi-Fu. Space and Place: The Perspective of Experience, University of Minnesota Press, Minneapolis, 1977.
Ubisoft. Watch Dogs.
Vidal, John. “Masdar City – a glimpse of the future in the desert,” The Guardian, 26 April 2011.
Wakefield, Jane. “Building cities of the future now,” BBC News, 21 February 2013.
Webb, Flemmich. “Sustainable cities: Innovative urban planning in Singapore,” The Guardian, 11 October 2012.
Weiser, Mark. “Creating the invisible interface.” ACM Conference on User Interface Software and Technology (UIST94), 1994.
The White House, Office of the Press Secretary. “President Clinton: Improving the Civilian Global Positioning System (GPS),” 1 May 2000.
Whyte, William H. The Social Life of Small Urban Spaces, Project for Public Spaces, New York, 1980.
Wilken, Rowan. “Calculated Uncertainty: Computers, Chance Encounters, and ‘Community’ in the Work of Cedric Price,” Transformations Issue 14, March 2007.
Woods, Eric. “PlanIT Valley: A Blueprint for the Smart City,” Matter Network, 31 March 2011.
Woyke, Elizabeth. “Very Smart Cities,” Forbes, 3 September 2009.
The following is section 4 of “Against the smart city,” the first part of The City Is Here For You To Use. Our Do projects will be publishing “Against the smart city” in stand-alone POD pamphlet and Kindle editions later on this month.
4 | The smart city pretends to an objectivity, a unity and a perfect knowledge that are nowhere achievable, even in principle.
Of the major technology vendors working in the field, Siemens makes the strongest and most explicit statement of the philosophical underpinnings on which their (and indeed the entire) smart-city enterprise is founded: “Several decades from now cities will have countless autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits and energy consumption, and provide optimum service…The goal of such a city is to optimally regulate and control resources by means of autonomous IT systems.”
We’ve already considered what kind of ideological work is being done when efforts like these are positioned as taking place in some proximate future. The claim of perfect competence Siemens makes for its autonomous IT systems, though, is by far the more important part of the passage. It reflects a clear philosophical position, and while this position is more forthrightly articulated here than it is anywhere else in the smart-city literature, it is without question latent in the work of IBM, Cisco and their peers. Given its foundational importance to the smart-city value proposition, I believe it’s worth unpacking in some detail.
What we encounter in this statement is an unreconstructed logical positivism, which, among other things, implicitly holds that the world is in principle perfectly knowable, its contents enumerable, and their relations capable of being meaningfully encoded in the state of a technical system, without bias or distortion. As applied to the affairs of cities, it is effectively an argument that there is one and only one universal and transcendently correct solution to each identified individual or collective human need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something which can be encoded in public policy, again without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)
Every single aspect of this argument is problematic.
— Perfectly knowable, without bias or distortion: Collectively, we’ve known since Heisenberg that to observe the behavior of a system is to intervene in it. Even in principle, there is no way to stand outside a system and take a snapshot of it as it existed at time T.
But it’s not as if any of us enjoy the luxury of living in principle. We act in historical space and time, as do the technological systems we devise and enlist as our surrogates and extensions. So when Siemens talks about a city’s autonomous systems acting on “perfect knowledge” of residents’ habits and behaviors, what they are suggesting in the first place is that everything those residents ever do — whether in public, or in spaces and settings formerly thought of as private — can be sensed accurately, raised to the network without loss, and submitted to the consideration of some system capable of interpreting it appropriately. And furthermore, that all of these efforts can somehow, by means unspecified, avoid being skewed by the entropy, error and contingency that mark everything else that transpires inside history.
Some skepticism regarding this scenario would certainly be understandable. It’s hard to see how Siemens, or anybody else, could avoid the slippage that’s bound to occur at every step of this process, even under the most favorable circumstances imaginable.
However thoroughly Siemens may deploy their sensors, to start with, they’ll only ever capture those qualities of the world that are amenable to capture, measure only those quantities that can be measured. Let’s stipulate, for the moment, that these sensing mechanisms somehow operate flawlessly, and in perpetuity. What if information crucial to the formulation of sound civic policy is somehow absent from their soundings, resides in the space between them, or is derived from the interaction between whatever quality of the world we set out to measure and our corporeal experience of it?
Other distortions may creep into the quantification of urban processes. Actors whose performance is subject to measurement may consciously adapt their behavior to produce metrics favorable to them in one way or another. For example, a police officer under pressure to “make quota” may issue citations for infractions she would ordinarily overlook; conversely, her precinct commander, squeezed by City Hall to present the city as an ever-safer haven for investment, may downwardly classify felony assault as a simple misdemeanor. This is the phenomenon known to viewers of The Wire as “juking the stats,” and it’s particularly likely to happen when financial or other incentives are contingent on achieving some nominal performance threshold. Nor is it the only factor likely to skew the act of data collection; long, sad experience suggests that the usual array of all-too-human pressures will continue to condition any such effort. (Consider the recent case in which Seoul Metro operators were charged with using CCTV cameras to surreptitiously ogle women passengers, rather than scan platforms and cars for criminal activity as intended.)
What about those human behaviors, and they are many, that we may for whatever reason wish to hide, dissemble, disguise, or otherwise prevent being disclosed to the surveillant systems all around us? “Perfect knowledge,” by definition, implies either that no such attempts at obfuscation will be made, or that any and all such attempts will remain fruitless. Neither one of these circumstances sounds very much like any city I’m familiar with, or, for that matter, would want to be.
And what about the question of interpretation? The Siemens scenario amounts to a bizarre compound assertion that each of our acts has a single salient meaning, which is always and invariably straightforwardly self-evident — in fact, so much so that this meaning can be recognized, made sense of and acted upon remotely, by a machinic system, without any possibility of mistaken appraisal.
The most prominent advocates of this approach appear to believe that the contingency of data capture is not an issue, nor is any particular act of interpretation involved in making use of whatever data is retrieved from the world in this way. When discussing their own smart-city venture, senior IBM executives argue, in so many words, that “the data is the data”: transcendent, limpid and uncompromised by human frailty. This mystification of “the data” goes unremarked upon and unchallenged not merely in IBM’s material, but in the overwhelming majority of discussions of the smart city. But different values for air pollution in a given location can be produced by varying the height at which a sensor is mounted by a few meters. Perceptions of risk in a neighborhood can be transformed by altering the taxonomy used to classify reported crimes ever so slightly. And anyone who’s ever worked in opinion polling knows how sensitive the results are to the precise wording of a survey. The fact is that the data is never “just” the data, and to assert otherwise is to lend inherently political and interested decisions regarding the act of data collection an unwonted gloss of neutrality and dispassionate scientific objectivity.
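To make the point concrete, here is a toy sketch in Python, with wholly invented incident counts, of how a single reclassification decision made upstream of "the data" changes the picture a neighborhood presents:

```python
# Hypothetical incident log for one neighborhood; every number is invented.
incidents = {"felony assault": 12, "misdemeanor assault": 30, "theft": 55}

def felony_rate(log, taxonomy):
    """Share of all incidents classed as felonies under a given taxonomy."""
    felonies = sum(n for offense, n in log.items() if taxonomy[offense] == "felony")
    return felonies / sum(log.values())

# Two taxonomies that differ in a single entry: where felony assault sits.
strict = {"felony assault": "felony",
          "misdemeanor assault": "misdemeanor",
          "theft": "misdemeanor"}
lenient = dict(strict, **{"felony assault": "misdemeanor"})

print(felony_rate(incidents, strict))   # about 0.124: a "dangerous" neighborhood
print(felony_rate(incidents, lenient))  # 0.0: a "safe" one, identical events
```

Nothing about the underlying events differs between the two runs; only the taxonomy does.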
The bold claim of perfect knowledge appears incompatible with the messy reality of all known information-processing systems, the human individuals and institutions that make use of them and, more broadly, with the world as we experience it. In fact, it’s astonishing that anyone would ever be so unwary as to claim perfection on behalf of any computational system, no matter how powerful.
— One and only one solution: With their inherent, definitional diversity, layeredness and complexity, we can usefully think of cities as tragic. As individuals and communities, the people who live in them hold to multiple competing and equally valid conceptions of the good, and it’s impossible to fully satisfy all of them at the same time. A wavefront of gentrification can open up exciting new opportunities for young homesteaders, small retailers and craft producers, but tends to displace the very people who’d given a neighborhood its character and identity. An increased police presence on the streets of a district reassures some residents, but makes others uneasy, and puts yet others at definable risk. Even something as seemingly straightforward and honorable as an anticorruption initiative can undo a fabric of relations that offered the otherwise voiceless at least some access to local power. We should know by now that there are and can be no Pareto-optimal solutions for any system as complex as a city.
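This is just the familiar logic of the Pareto frontier. A minimal sketch, with invented scores, shows why no single policy can be "the" optimum once constituencies value different goods:

```python
# Candidate policies scored 0-10 on two competing goods (invented numbers):
# benefit to long-time residents vs. attractiveness to new investment.
policies = {"A": (9, 2), "B": (6, 6), "C": (2, 9), "D": (4, 4)}

def pareto_front(scored):
    """Return the policies that no other policy beats on both dimensions at once."""
    def dominated(name):
        x, y = scored[name]
        return any(x2 >= x and y2 >= y and (x2, y2) != (x, y)
                   for other, (x2, y2) in scored.items() if other != name)
    return sorted(p for p in scored if not dominated(p))

print(pareto_front(policies))  # ['A', 'B', 'C']: three "optima", no single winner
```

Only D is strictly dominated; choosing among A, B and C is not a computation at all, but a political judgment about whose good counts for how much.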
— Arrived at algorithmically: Assume, for the sake of argument, that there could be such a solution, a master formula capable of resolving all resource-allocation conflicts and balancing the needs of all a city’s competing constituencies. It certainly would be convenient if this golden mean could be determined automatically and consistently, via the application of a set procedure — in a word, algorithmically.
In urban planning, the idea that certain kinds of challenges are susceptible to algorithmic resolution has a long pedigree. It’s already present in the Corbusian doctrine that the ideal and correct ratio of spatial provisioning in a city can be calculated from nothing more than an enumeration of the population; it underpins the complex composite indices of Jay Forrester’s 1969 Urban Dynamics; and it lay at the heart of the RAND Corporation’s (eventually disastrous) intervention in the management of 1970s New York City. No doubt part of the idea’s appeal to smart-city advocates, too, is the familial resemblance such an algorithm would bear to the formulae by which commercial real-estate developers calculate air rights, the land area that must be reserved for parking in a community of a given size, and so on.
In the right context, at the appropriate scale, such tools are surely useful. But the wholesale surrender of municipal management to an algorithmic toolset — for that is surely what is implied by the word “autonomous” — would seem to repose an undue amount of trust in the party responsible for authoring the algorithm. At least, if the formulae at the heart of the Siemens scenario turn out to be anything at all like the ones used in the current generation of computational models, critical, life-altering decisions will hinge on the interaction of poorly-defined and surprisingly subjective values: a “quality of life” metric, a vague category of “supercreative” occupations, or other idiosyncrasies along these lines. The output generated by such a procedure may turn on half-clever abstractions, in which a complex circumstance resistant to direct measurement is represented by the manipulation of some more easily-determined proxy value: average walking speed stands in for the more inchoate “pace” of urban life, while the number of patent applications constitutes an index of “innovation.”
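The arbitrariness is easy to demonstrate. In the toy composite index below, where every name, weight and proxy value is invented, a modest shift in weighting flips which district the algorithm declares "better":

```python
# Two districts scored on two easily-measured proxies; all numbers invented.
districts = {
    "Northside": {"pace": 90, "patents": 30},  # brisk sidewalks, few patent filings
    "Southside": {"pace": 60, "patents": 80},  # slower streets, more filings
}

def quality_index(d, w_pace, w_innovation):
    """A made-up 'quality of life' composite: a weighted sum of proxy values."""
    return w_pace * d["pace"] + w_innovation * d["patents"]

def ranking(w_pace, w_innovation):
    """Districts ordered best-first under the given weights."""
    return sorted(districts, reverse=True,
                  key=lambda name: quality_index(districts[name], w_pace, w_innovation))

print(ranking(0.7, 0.3))  # ['Northside', 'Southside']
print(ranking(0.3, 0.7))  # ['Southside', 'Northside']
```

Same districts, same measurements; the "objective" answer depends entirely on who chose the weights.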
Even beyond whatever doubts we may harbor as to the ability of algorithms constructed in this way to capture urban dynamics with any sensitivity, the element of the arbitrary we see here should give us pause. Given the significant scope for discretion in defining the variables on which any such thing is founded, we need to understand that the authorship of an algorithm intended to guide the distribution of civic resources is itself an inherently political act. And at least as things stand today, neither in the Siemens material nor anywhere else in the smart-city literature is there any suggestion that either algorithms or their designers would be subject to the ordinary processes of democratic accountability.
— Encoded in public policy, and applied transparently, dispassionately and in a manner free from politics: A review of the relevant history suggests that policy recommendations derived from computational models are only rarely applied to questions as politically sensitive as resource allocation without some intermediate tuning taking place. Inconvenient results may be suppressed, arbitrarily overridden by more heavily-weighted decision factors, or simply ignored.
The best-documented example of this tendency remains the work of the New York City-RAND Institute, explicitly chartered to implant in the governance of New York City “the kind of streamlined, modern management that Robert McNamara applied in the Pentagon with such success” during his tenure as Secretary of Defense (1961-1968). The statistics-driven approach that McNamara’s Whiz Kids had so famously brought to the prosecution of the war in Vietnam, variously thought of as “systems analysis” or “operations research,” was first applied to New York in a series of studies conducted between 1973 and 1975, in which RAND used FDNY incident response-time data to determine the optimal distribution of fire stations.
Methodological flaws undermined the effort from the outset. RAND, for simplicity’s sake, chose to use the time a company arrived at the scene of a fire as the basis of their model, rather than the time at which that company actually began fighting the fire; somewhat unbelievably, for anyone with the slightest familiarity with New York City, RAND’s analysts then compounded their error by refusing to acknowledge traffic as a factor in response time. Again, we see some easily-measured value used as a proxy for a reality that is harder to quantify, and again we see the distortion of ostensibly neutral results by the choices made by an algorithm’s designers. But the more enduring lesson for proponents of data-driven policy has to do with how the study’s results were applied. Despite the mantle of coolly “objective” scientism that systems analysis preferred to wrap itself in, RAND’s final recommendations bowed to factionalism within the Fire Department, as well as the departmental leadership’s need to placate critical external constituencies; the exercise, in other words, turned out to be nothing if not political.
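The shape of the RAND error can be compressed into a few lines of code. In this deliberately crude siting model, with all distances, demand weights and congestion multipliers invented, the "optimal" firehouse site changes the moment traffic is admitted into the model at all:

```python
# A deliberately crude firehouse-siting model; every number is invented.
dist = {                                   # km from each candidate site to each cluster
    "site_A": {"harbor": 1.0, "uptown": 6.0},
    "site_B": {"harbor": 3.0, "uptown": 3.5},
}
demand = {"harbor": 1, "uptown": 2}        # relative incident frequency
traffic = {"harbor": 3.0, "uptown": 1.0}   # congestion multiplier on each approach

def mean_response(site, use_traffic):
    """Demand-weighted mean travel time from a site to all incident clusters."""
    total = sum(w * dist[site][c] * (traffic[c] if use_traffic else 1.0)
                for c, w in demand.items())
    return total / sum(demand.values())

def best_site(use_traffic):
    """The site a planner would pick to minimize mean response time."""
    return min(dist, key=lambda s: mean_response(s, use_traffic))

print(best_site(use_traffic=False))  # site_B, the "clean" model's choice
print(best_site(use_traffic=True))   # site_A, once congestion is counted
```

A single omitted factor is enough to move the recommendation, which is precisely the kind of sensitivity RAND's choice of proxy concealed.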
The consequences of RAND’s intervention were catastrophic. Following their recommendations, fire battalions in some of the most vulnerable sections of the city were decommissioned, while the department opened other stations in low-density, low-threat areas; the spatial distribution of the firefighting assets that remained actually prevented resources from being applied where they were most critically needed. Great swaths of the city’s poorest neighborhoods burned to the ground as a direct result — most memorably the South Bronx, but immense tracts of Manhattan and Brooklyn as well. Hundreds of thousands of residents were displaced, many permanently, and the unforgettable images that emerged fueled perceptions of the city’s nigh-apocalyptic unmanageability that impeded its prospects well into the 1980s. Might a less-biased model, or a less politically-skewed application of the extant findings, have produced a more favorable outcome? This obviously remains unknowable…but the human and economic calamity that actually did transpire is a matter of public record.
Examples like this counsel us to be wary of claims that any autonomous system will ever be entrusted with the regulation and control of civic resources — just as we ought to be wary of claims that the application of some single master algorithm could result in a Pareto-efficient distribution of resources, or that the complex urban ecology might be sufficiently characterized in data to permit the effective operation of such an algorithm in the first place. For all of the conceptual flaws we’ve identified in the Siemens proposition, though, it’s the word “goal” that just leaps off the page. In all my thinking about cities, it has frankly never occurred to me to assert that cities have goals. (What is Cleveland’s goal? Karachi’s?) What is being suggested here strikes me as a rather profound misunderstanding of what a city is. Hierarchical organizations can be said to have goals, certainly, but not anything as heterogeneous in composition as a city, and most especially not a city in anything resembling a democratic society.
By failing to account for the situation of technological devices inside historical space and time, the diversity and complexity of the urban ecology, the reality of politics or, most puzzlingly of all, the “normal accidents” all complex systems are subject to, Siemens’ vision of cities perfectly regulated by autonomous smart systems thoroughly disqualifies itself. But it’s in this depiction of a city as an entity with unitary goals that it comes closest to self-parody.
If it seems like breaking a butterfly on a wheel to subject marketing copy to this kind of dissection, remember that I am merely taking Siemens and the other advocates of the smart city at their word: this is what they (claim to) really believe. When pushed on the question, of course, some individuals working for enterprises at the heart of the smart-city discourse admit that what their employers actually propose to do is distinctly more modest: they simply mean to deploy sensors on municipal infrastructure, and adjust lighting levels, headway or flow rates to accommodate real-time need. If this is the case, perhaps they ought to have a word with their copywriters, who do the endeavor no favors by indulging in the imperial overreach of their rhetoric. As matters now stand, the claim of perfect competence that is implicit in most smart-city promotional language — and thoroughly explicit in the Siemens material — is incommensurate with everything we know about the way technical systems work, as well as the world they work in. The municipal governments that constitute the primary intended audience for materials like these can only be advised, therefore, to approach all such claims with the greatest caution.
 For example, in New York City, an anonymous survey of “hundreds of retired high-ranking [NYPD] officials” found that “tremendous pressure to reduce crime, year after year, prompted some supervisors and precinct commanders to distort crime statistics” they submitted to the centralized COMPSTAT system. Chen, David W., “Survey Raises Questions on Data-Driven Policy,” The New York Times, 08 February 2010.
 Simon, David, Kia Corthron, Ed Burns and Chris Collins, The Wire, Season 4, Episode 9: “Know Your Place,” first aired 12 November 2006.
 Fletcher, Jim, IBM Distinguished Engineer, and Guruduth Banavar, Vice President and Chief Technology Officer for Global Public Sector, personal communication, 08 June 2011.
 Migurski, Michal. “Visualizing Urban Data,” in Segaran, Toby and Jeff Hammerbacher, Beautiful Data: The Stories Behind Elegant Data Solutions, O’Reilly Media, Sebastopol CA, 2012: pp. 167-182. See also Migurski, Michal. “Oakland Crime Maps X,” tecznotes, 03 March 2008.
 See, as well, Sen’s dissection of the inherent conflict between even mildly liberal values and Pareto optimality. Sen, Amartya Kumar. “The impossibility of a Paretian liberal.” Journal of Political Economy Volume 78 Number 1, Jan-Feb 1970.
 Forrester, Jay. Urban Dynamics, The MIT Press, Cambridge, MA, 1969.
 See Flood, Joe. The Fires: How a Computer Formula Burned Down New York City — And Determined The Future Of American Cities, Riverhead Books, New York, 2010.
 See, e.g. Bettencourt, Luís M.A. et al. “Growth, innovation, scaling, and the pace of life in cities,” Proceedings of the National Academy of Sciences, Volume 104 Number 17, 24 April 2007, pp. 7301-7306.
 Flood, ibid., Chapter Six.
 Rider, Kenneth L. “A Parametric Model for the Allocation of Fire Companies,” New York City-RAND Institute report R-1615-NYC/HUD, April 1975; Kolesar, Peter. “A Model for Predicting Average Fire Company Travel Times,” New York City-RAND Institute report R-1624-NYC, June 1975.
 Perrow, Charles. Normal Accidents: Living with High-Risk Technologies, Basic Books, New York, 1984.
Over the weekend I finally got a chance to sit down with Theodore Spyropoulos’s new book Adaptive Ecologies, which I’ve been looking forward to for a bit now. (Thanks, Steph!) Spyropoulos is an instructor at London’s Architectural Association and director of the school’s Design Research Laboratory, and Adaptive Ecologies is his and his students’ attempt to push arguments about the computational generation of form a little further downfield.
The book’s subtitle says it all, sorta: “Correlated systems of living.” Broadly, the argument being made here is that new technologies allow us to fuse architecture’s formal qualities with its functional or performative ones. We can imagine the world populated with entirely new kinds of structures: each an active, adaptive mesh capable of responding to conditions of use, and expressing this response through its macroscopic physical manifestation, at every scale from unit (house) to cluster (building) to collective (megastructure or masterplan). What Spyropoulos and his student-collaborators are trying to develop are the strategies or vocabularies one would use to devise structures like this.
Another way of putting things is to say that they’re attempting to link or join the two primary modes in which computation currently informs architecture. On one hand, we have the procedural, iterative, processor-intensive design techniques that have been in vogue for the past decade or so; on the other, we have the potential we’ve discussed so often here, that of networked informatics to endow structures and environments with the ability to sense and respond to varying conditions of occupancy, load or use. Adaptive Ecologies binds these threads together, and what results is a potent intellectual figure: smart city as architecture machine.
This is an intriguing argument, to say the least, and its evocation of urban space as a vast, active, living information system resonates profoundly with certain of my own concerns. Further, Spyropoulos admirably attempts to situate this work in its proper context, adducing a secret history in which his students’ towering blebs and polypy complexes recognizably descend from a lineage of minor heroes that includes Bucky Fuller, Archigram and the Japanese Metabolists, Gordon Pask and Cedric Price.
All of the usual tropes are present in Adaptive Ecologies: DLA and its manifestation in coral and Hele-Shaw cells; genetic algorithms, agent-based models and cellular automata; stigmergy and swarming logics; siphonophores and mangroves; even Frei Otto’s experiments with the self-organizing potential of wet thread.
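For readers who haven't encountered it, diffusion-limited aggregation is the simplest of these tropes to convey in code. A minimal sketch (grid size, particle count and seed are mine, purely illustrative):

```python
import random

def dla(n_particles, size=21, seed=42):
    """Minimal diffusion-limited aggregation: random walkers released at the
    grid edge stick wherever they first touch the growing central cluster."""
    rng = random.Random(seed)
    c = size // 2
    cluster = {(c, c)}                               # seed particle at center
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(cluster) < n_particles + 1:
        # launch a walker from a random cell on the grid boundary
        edge = rng.choice(["top", "bottom", "left", "right"])
        x, y = {"top": (0, rng.randrange(size)),
                "bottom": (size - 1, rng.randrange(size)),
                "left": (rng.randrange(size), 0),
                "right": (rng.randrange(size), size - 1)}[edge]
        # wander until adjacent to the cluster, then freeze in place
        while not any((x + dx, y + dy) in cluster for dx, dy in moves):
            dx, dy = rng.choice(moves)
            x, y = (x + dx) % size, (y + dy) % size  # toroidal wrap keeps it on-grid
        cluster.add((x, y))
    return cluster

print(len(dla(30)))  # 31 cells, grown into a branching, coral-like figure
```

A handful of rules, no designer anywhere in the loop, and out comes exactly the kind of dendritic figure the book's plates are full of.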
But troublingly, these organic processes are used to generate designs that are not shown to be “adaptive” at all — at least not in the materials reproduced here. My primary beef with the book turns out to be the same I hold against the contemporary school of parametricists (which runs the entire gamut of seriousness, interest and credibility, from Zaha Hadid herself and her in-house ideologist Patrik Schumacher straight through to charlatans like Mitchell Joachim): that it fetishizes not merely form but the process of structuration. Or really, that it fetishizes the process of structuration to the detriment of usable form.
To make a fetish of these generative processes is to misunderstand their meaning, or to think that they are not already operating in our built environments. I promise you these algorithms of self-organization are always already there in the city — in the distribution of activities, in the dynamics of flow, in every last thing but the optical shape. The beehive’s form is epiphenomenal of its organizing logic, and so is the city’s. To reify such an organizing logic in the shape of a building strikes me as stumbling into a category error. Worse: as magical thinking, as though we’d made the rhizome an emblem of state to be carved in the façades of our buildings, where once we might have inscribed sheaves of wheat or birds of prey.
Consider the contribution of usual-suspect Makoto Sei Watanabe. Watanabe is an architect who believes that architecture must replace unreliable designerly inspiration with a Science valid in all times and places, and I’ve beaten up on him before. He’s represented here by a series of sculptures collectively called WEB FRAME, one version of which adorns the Iidabashi station of Tokyo’s Oedo subway line.
As is usual with Watanabe, he invokes “neural network[s], genetic algorithms and artificial intelligence” to explain the particular disposition of elements you can see in Iidabashi station. But WEB FRAME is best understood as an ornamental appliqué. It’s nicer to look at than a bare ceiling, arguably, but that’s all it is. Despite its creator’s rhetoric, its form at any given moment bears no relationship whatsoever to the flow of passengers through the subway system, the performative capacities of the station itself, or any potential regulation of either. It’s the outer sign of something, entirely detached from its substance. It adapts to nothing. It is, in a word, static.
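For contrast, even the crudest genuine coupling of form to sensed condition is trivial to specify. A sketch (the mapping and its thresholds are mine, purely illustrative, and not anything Watanabe or the book proposes):

```python
def aperture_fraction(lux, lo=5_000, hi=50_000):
    """Map a sensed illuminance reading (lux) to a shading-panel opening
    fraction, clamped to [0, 1]. Thresholds are invented for illustration."""
    return max(0.0, min(1.0, (hi - lux) / (hi - lo)))

print(aperture_fraction(5_000))   # 1.0: overcast sky, panels fully open
print(aperture_fraction(50_000))  # 0.0: direct sun, panels fully closed
print(aperture_fraction(27_500))  # 0.5: and everything in between
```

One sensed value in, one formal parameter out: that feedback loop, however elaborated, is what "adaptive" would have to mean, and it is exactly what WEB FRAME lacks.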
Although it may be a particularly weak example, Watanabe’s work is marred by the same problems that afflict the more interesting work elsewhere in the volume:
- Not one of the projects illustrated uses parameters derived from real-time soundings to generate its form, even notionally. For some projects, the parameters used in an iterative design process appear to have been chosen specifically for the formal properties that result from their selection; for others, the seed values occupy an extremely wide range, producing a family of related design solutions rather than a single iconic form.
There’s nothing wrong, necessarily, with either approach. But unless I’m missing something really basic, the whole point of this exercise is to devise structures whose properties change over relatively short spans of time (minutes to months) in response to changing conditions. In turn, that would seem to imply some way of coupling the parameters driving the structures’ form to one or another value extracted from their local environment. And while all of the student work featured in the book draws on the beguilingly stochastic processes of structuration I enumerated above, only one of them claims to have used data gathered in this way as its input or seed state.
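A minimal sketch of what such a coupling might look like, for concreteness. Everything here is hypothetical — the function name, the sensor, the mapping — since nothing of the sort is stipulated in the book; the point is only that a form-driving parameter would be read from the environment rather than chosen by the designer:

```python
# Hypothetical sketch: a form-driving parameter coupled to a live
# environmental reading rather than to a designer-chosen seed value.

def occupancy_to_aperture(occupancy: float, min_open: float = 0.1,
                          max_open: float = 1.0) -> float:
    """Map a sensed occupancy fraction (0..1) to a facade aperture
    ratio, clamped to the mechanism's physical limits."""
    occupancy = max(0.0, min(1.0, occupancy))
    return min_open + (max_open - min_open) * occupancy

# Each sensed state yields a different configuration — the coupling
# the book's projects gesture at, but never actually specify.
readings = [0.0, 0.5, 1.0]   # e.g. night, midday, rush hour
apertures = [round(occupancy_to_aperture(r), 2) for r in readings]
print(apertures)             # [0.1, 0.55, 1.0]
```

Trivial as it is, even this stub forces the questions the featured projects defer: which value is sensed, how often, and what physical limits clamp the response.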
This is Team Shampoo’s exploration of “hair-optimi[z]ed detour networks,” and it’s both wildly problematic in its own right and emblematic of the worrisome tendencies that run throughout the volume. Shampoo’s design for a tower complex uses autonomous computational agents to simulate morning and evening pedestrian flows through a district, and in turn uses these to derive “optimal” linkages and points of attachment for circulation structures hardwired into the urban fabric itself. The results are certainly striking enough, but they are precisely optimized: that is, narrowly perfected for one use case, and one use case only.
Of course, we know that conditions of pedestrian flow change over the course of the week, over the seasons of the year, with economic cycles and the particular disposition of services and amenities reflected in the city. A conventional street grid, especially one with short blocks, is already more adaptive to changes in these circumstances than any lattice of walk-tubes in the sky, because it allows people to choose from a far wider variety of alternative paths from origin to destination. In designs like Shampoo’s, we’re still making the same blunder Jane Jacobs accused the High Modernists of making: mistaking the appearance of something for its reality.
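The over-fitting critique can be made concrete with a toy model. All the numbers and place names below are invented for illustration: a single sky-bridge “optimized” for the simulated morning flow beats the grid on that one pattern, and fails badly the moment demand shifts, while the grid’s many alternative paths degrade not at all:

```python
# Toy illustration of narrow optimization vs. a redundant street grid.
# Costs and demand patterns are invented, purely for illustration.

def trip_cost(network: dict, origin: str, dest: str) -> float:
    """Cost of a direct link, or a heavy detour penalty if unserved."""
    return network.get((origin, dest), 10.0)

# One sky-bridge, placed exactly where the morning simulation said.
optimized = {("homes", "offices"): 1.0}

# A street grid: every plausible pair reachable at moderate cost.
grid = {pair: 3.0 for pair in
        [("homes", "offices"), ("offices", "homes"),
         ("homes", "shops"), ("shops", "offices")]}

morning = [("homes", "offices")] * 10                      # the design case
weekend = [("homes", "shops"), ("shops", "offices")] * 5   # conditions change

for name, net in [("optimized", optimized), ("grid", grid)]:
    am = sum(trip_cost(net, o, d) for o, d in morning)
    wk = sum(trip_cost(net, o, d) for o, d in weekend)
    print(name, am, wk)
```

The “optimized” network wins handily on the pattern it was tuned for (10 vs. 30) and collapses on the weekend pattern (100 vs. 30) — a crude rendering of why short blocks and redundant paths are the more genuinely adaptive structure.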
And if the point of all this applied parametricism is to permit each building or cluster of buildings to take on the form appropriate to the exigencies of the moment, that I can tell, only a single one of the projects featured appears in states responding to multiple boundary conditions. This is Team CXN-Reaction’s Swarm effort, which proposes housing units that collapse flat when not occupied, stacked in a snaky concertina reaching to the sky. (Admittedly, it’s difficult to put a finger on any particular purpose sufficient to justify this tactic of expansion and contraction, unless they’re arguing that the long-term maintenance of an unused unit is significantly cheaper in the collapsed state, but it does at least show a system that is in principle capable of multiple configurations.) So while Adaptive Ecologies itself acknowledges three registers of iterative design — behavioral, self-organizational and morphogenetic — it appears to be only the last of these that is given any serious consideration.
- More seriously, none of the structures featured appear to be provided with any actual mechanism that would permit dynamic adaptation. We can be generous, and assume that these structures are notionally equipped with the sensors, actuators and other infrastructural componentry necessary to the work of transformation — designed, perhaps, by students in other modules of the AA, or left up to hands-on experimental practices like The Living. But nowhere in these renderings is any such thing stipulated (again, that I could tell on a first reading), and that makes the whole outing little more than a formal exercise.
I suppose the feeling is that it’s far too early in the prehistory of adaptive architecture for such details, which would be bound to obsolesce rapidly in any event. But even where there is a specific mechanism identified — notably Team Architecta’s rubber joint, permitting 360-degree rotation and a variety of geometric configurations — it’s never explained how it could possibly function as a component of anything but a model. Is it supposed to work hydraulically? Pneumatically? Through shape-memory myoelectrics? And how is access for maintenance and upgrade supposed to be accomplished? (Scaling even a few panes of one of Chuck Hoberman’s expanding surfaces to room size, and keeping the installation working under conditions of daily use, required constant physical debugging.) It’s hard to imagine, say, Bucky Fuller settling for a sketch of one of his tensegrity structures, and not working questions like these out in detail.
- No attempt is made to reconcile these formal possibilities with the way buildings are actually built. I am perfectly willing to believe that, at some point in the diiiiiistant future, self-powering, self-assembling, self-regulating structures will be “built” one molecule at a time. (At that point, the build/inhabit/maintain distinction would be meaningless, actually, as provisions for various kinds of shelter would presumably arise and subside as required.) But until and unless that point is reached, there will always be human fabricators, contractors and construction workers involved in the assembly of macroscale structures, and if what you intend to build is to be anything other than a one-off proof of concept, that means standardized processes at scale. Institutional and disciplinary conventions. Standard components. Generally-accepted practices and procedures. At no point do the structures described in Adaptive Ecologies coincide with any of these provisions of the contemporary praxis of production.
Again, yes: this is “just a design lab.” But where are these details to be worked out, if not in a design lab? Thousands of kids around the planet already know how to use Maya to crank out unbuildably biomorphic abstractions — functioning as a hinge between these “futuristic” visions and plans which might be realized is where the real discipline and the real inspiration now lie. (I won’t comment for now on the obvious irony that maintaining all of these structures as designed would require the most extraordinary specialist interventions in practice, taking them still further from the possibility that residents themselves could usefully modify or adapt them.)
- Finally, no attempt is made to reconcile these formal possibilities with any actual practice of living. In a book stuffed full of the most extravagant imagery, one illustration in particular — the work of Danilo Arsic, Yoshimasa Hagiwara and Hala Sheikh’s Team Architecta — stands out for me as an indication that the discipline is speaking only to itself. It features the by-now-familiar typology of a high-rise service-and-circulation core studded with plug-in living pods, the units of which rather resemble mutant avian skulls. Put aside for a second the certainty that this Kikutake- or Archigram-style typology, first articulated in the late 1950s, would have enveloped the globe by now if there were anything remotely appealing or useful about it. What concerns me here is the frankly malevolent appearance of Architecta’s take on the trope (which just between you and me strikes me as kind of awesome, but which I cannot imagine being built in any city this side of Deadworld).
I know, I know: tastes change over time, just as they vary from place to place. Still, who wants to live in a structure that looks like nothing so much as a ravening gyre of supremely Angry Birds? Unless you can somehow convince me that you could gather enough devotees of True Norwegian Black Metal in one place to populate a shrieking kvlt arcology, I think this one’s an index of parametric design’s weirdly airless inwardness.
I get that this is an aesthetic of the age — “gigaflop Art Nouveau,” I called it a few years back. (1998, to be precise.) But as an aesthetic, it can and should stand on its own, without being married to an entirely separate discourse about responsive urbanism. As a casebook of purely formal studies and strategies, Adaptive Ecologies is by and large reasonably convincing, and here and there very much so. It’s all the rhetoric about biomimetic or physiomimetic processes of structuration somehow leading to more, rather than less, flexible assemblages that’s its weakest point, and unfortunately that’s the very trellis that Spyropoulos has used to train his vines on. I welcome and applaud what he’s up to in Adaptive Ecologies, but as far as I can tell the attempt to devise a vocabulary of dynamic form that is capable of change over relatively short time scales still awaits its fundamental pattern language.
And if nothing else, it’s surreal to look up from this book and gaze out the window onto a city where SHoP’s towers are considered architecturally daring, and in which the overwhelmingly fundamental problem isn’t the timidity of its design but the inability to provide all residents with decent, affordable housing.
Henri Lefebvre once asked, “Could it be that the space of the finest cities came into being after the fashion of plants and flowers in a garden?” I myself happen to believe that this is true not merely of the finest cities, but of all cities: that they are given form by generative processes as organic as any of those so beloved of the parametricists, operating at a scale and subtlety beyond the ability of any merely optical apparatus to detect. It is when we finally learn to take the measure of those processes that we will stand ready to author truly adaptive ecologies.
One final note: it’s only fair to point out that much of the work on view in Adaptive Ecologies is on the order of eight years old, and that a great deal can change in that kind of time. I sure as hell wouldn’t want to be held to every position I advanced in 2005.
An article I was commissioned to write for the Touch issue of What’s Next magazine.
What does it mean for a text to be digital?
In principle, it can be replicated in perfect fidelity, and transmitted to an unlimited number of recipients worldwide, at close to zero cost. Powerful analytic tools can be brought to bear on it, and our reading of it. It can be compared against other texts, plumbed for clues as to its provenance and authorship. Each of our acts of engagement with it — whether of acquisition, reading, or annotation — can be shared with our social networks, mobilized as props in an ongoing performance of self. Above all, it becomes (to use the jargon practically unavoidable in any discussion of information technology) “platform-agnostic.” This is to say that it becomes independent, to a very great degree, of the physical medium in which it currently happens to be instantiated.
To varying degrees, these things have been true as long as words have been encoded in ones and zeroes — certainly since 1971, when Project Gutenberg was founded with the intention of digitizing as much of the world’s literature as possible, and making it all available for free. Why is it the case, then, that digital books only seem to have entered our lives in any major way in the last two or three years?
The apparently sudden arrival of the digital text likely owes something to the top-of-mind quality Amazon currently enjoys in its main markets, its name and value proposition as prominent in our awareness as those of the grocery chains, television networks or airlines we patronize — a presence it’s taken the company the better part of the last fifteen years to build up. And it surely has something to do with the widespread popular facility with the tropes and metaphors governing our engagement with digital content of all sorts that has developed over the same period of time, to the point that it’s increasingly hard to meet a grandparent inconversant with downloads, torrents and the virtues of cloud storage.
But the fundamental reason is probably that bit about platform-agnosticism. Anyone so inclined could have “engaged digital text” on a conventional computer at any point in the past forty years. But the act of reading didn’t — and maybe couldn’t — properly come into its own in the digital era until there was a platform for literature as present to the senses as paper itself, something as well-suited to the digital text as the road is to the automobile. I refer, of course, to the networked tablet.
It’s only with the widespread embrace of these devices that digital reading has become ubiquitous. The merits of the tablet as a reading environment — relatively inexpensive, lightweight and comfortable in the hand, capable of storing thousands of volumes — may strike us as self-evident. But there’s another factor that underlies its general appeal, and that is the specific phenomenology of the way we manipulate reading material when using one.
We read text on a tablet as pixels, just as we would on any screen. But the ways in which we physically address and move through a body of such pixels have more in common with the behaviors we learned from books in earliest childhood than with anything we picked up in the course of later encounters with computers. This is why the post-PC tablet feels more “intuitive” to us, despite the frank novelty of the gestures we must learn in order to use it, and which no book in the world has ever afforded: the swipe, the drag, the pinch, the tap.
This is the new tactility of reading. But where there are comparatively few semantically-meaningful ways in which the reader’s hand can meet the pages of a material book, the experience of engaging a digital text with the finger is subject to a certain variability. It’s not a boundless freedom — it’s delimited on one side by technological limitations, and on the other by the choices of an interaction designer — but it does require explication.
The first order of variability is the screen medium itself. Each of the major touchscreen technologies available — resistive, capacitive, projected-capacitive, optical — imposes its own constraints on the latency and resolution with which a screen registers a touch, and therefore how long one must place one’s finger against it to turn a page or select a word for definition or a passage for annotation. Reading on a good screen feels effortless, even transparent — but particularly high latency or low resolution can easily disrupt the flow of experience, lifting the reader up and out of the text entirely.
The second is the treatment of type. As critical as it is to the legibility and emotional resonance of a text, and even at the higher resolutions now theoretically available, typography is all but invariably treated as though it had not been refined over five centuries. It still feels like we are many years and product versions away from type on the tablet rendered with the craft and care it deserves.
A third order of variability consists in the separation of content, style and interface elements inherent in contemporary application design. This means that both the meaning of gestural interactions and the treatment of the page itself can vary from environment to environment. Especially given the pressure developers are under to differentiate their products from one another, a tap in the Kindle for iPad application may not mean precisely what a tap in Readmill or Instapaper or Reeder does, or work in at all the same way.
In fact, something as simple and as basic to the act of reading as turning a page is handled differently in all of these contexts.
Originally, of course, the pagination of text was an artifact of necessity, something imposed by running a semantically continuous text across a physically discontinuous quantity of leaves. One might think, therefore, that pagination would be among the first things to go in making the leap to the digital reading environment, but contemporary applications tend to retain it as a skeuomorphism, larding down the interaction with animated page curls and sound effects.
On the Kindle proper, the reader presses a button — one for page forward, another for page back — and the entire screen blanks and refreshes as the new page loads, a transition imposed by the nature of electronic pigment. In the Kindle app, by contrast, the page slides right to left, slipping from future to present to past in a series of discrete taps.
The Instapaper application is, perhaps, truest to the nature of digital copy. It dispenses with all of this, and treats the document as one continuous environment: swipe upward when you’re ready for more. Instapaper is an acknowledgment of the text’s liberation from the constraints of crude matter. Handled this way, there’s no reason a digital text can’t return to something approximating the book’s earliest form, a scroll — in this case, one capable of unspooling without limit.
Finally, we also need to account for what it means to absorb text as a luminous projection. Marshall McLuhan drew a distinction between “light-on” media — that is, those in which content inscribed on a passive surface like paper is illuminated by an external light source — and “light-through” media, like our luminous tablets; per his insistence that medium is coextensive with message, we can assume that the selfsame text consumed in these two ways would be received differently, emotionally every bit as much as cognitively.
As it happens, I have both an actual, e-paper Kindle — digital, but nevertheless light-on — and Kindle applications for the eminently light-through iPhone and iPad. And purely anecdotally, it does seem to be the case that I have an easier time with thornier, weightier reading on the e-paper device. Novels are fine on the iPad, even on my phone…but if I want to wrestle with Graham Harman or Susan Sontag, I reach for the Kindle.
The McLuhanite in me frets that, in embracing the tablet, we inadvertently give up much of our engagement with the text. That beyond sentimentality, there is something about the act of turning a page to punctuate a thought, or the phenomenology of light reflecting off of paper saturated with ink, that conditions the act of reading and makes it what we recognize it to be, at some level beneath the threshold of conscious perception.
Which brings us back, at last, to the printed artifact. We can acknowledge that the networked tablet is a brilliant addition to any reader’s instrumentarium. I’m certain that it increases the number of times and places at which people read, and know from long, intimate and sorrowful personal experience the difference it makes where the portability of entire libraries is concerned. But it’s not quite the same thing as a book or a magazine, and cannot entirely replace them.
Curiously enough, the ambitions to which paper appears to remain best-suited are diametrically opposed:
On the one hand, deep, thoughtful engagement with a body of language, an engagement that fully leverages the craft of bookmaking. In this pursuit, the tablet cannot yet offer nearly the typographic nicety, conscious design for legibility or perceptual richness trivially available from ink on paper — all of the things, in other words, that permit the reader to immerse herself for longer, and with less strain.
But there are also occasions on which surface is all important, where the ostensible content is almost incidental to the qualities of its packaging. Here the texture or other phenomenological qualities of paperstock itself — even its smell — communicate performatively; I think of glossy lifestyle magazines. It’s hard to imagine any tablet or similar device affording these virtues in anything like the near term.
If we understand a book as a container, the precise shape that container takes ought to reflect the nature of its intended contents, and what one proposes to do with them. Even as we acknowledge all the many virtues of networked, digital texts, the texture, tooth and heft of paper will ensure that, for at least the contexts I’ve specified here, it remains irreplaceable among all the ways we contain thought as it flows from one human mind to another.