Archive | Interactions and experiences

Every user a developer, part II, or: Momcomp

I wanted to take up a challenge Mike Migurski inadvertently laid down in comments the other day, in his response to my piece on the ongoing democratization of development for interactive systems.

I get where he’s coming from, especially his point about the moving-goalpost definition of “ease of use.” But I’m not convinced that there isn’t a whole lot further we could take tools like App Inventor toward making them painless for ordinary people to use, and I think — if you’ll forgive me, Mike — he’s mistaking my point about alternatives to the app paradigm. (It’s a little inside-baseball, but in brief: I acknowledge that the contemporary thrust of development is about things that happen in the browser, and that many “apps” are essentially specialized browsers. I just happen to believe that, despite the relative accessibility of tools like Apple’s iOS SDK, this whole model is still unnecessarily intimidating, and based on paradigms most users simply won’t get.)

So I thought I’d try a little thought experiment, to see if I couldn’t do a better job of getting my point across. I thought I’d start with a real, non-technically-inclined person, my mom, and a real challenge she used to confront on a several-times-a-month basis. And then I tried to imagine a toolkit which would allow her — as she is, and with little or no additional training — to build a custom module of functionality that would help her address this challenge, that she could use on an ad-hoc or near-realtime basis, and that would effectively lower the net frustration she experienced.

I have to say, right up front, that what I came up with is heavily, heavily dependent on circumstances which might never come to be. It posits a world in which there are widely-shared specifications for the description of networked objects we might encounter, whether those objects are people, places, things, or other kinds of system resources. And, of course, the open, shared, widely-adopted interoperability frameworks and standards that would allow us to bind these resources together and animate their interaction in useful ways. This, to put it mildly, is not the world we live in today. But it’s a world I’d like to see come to life, and if the best way to predict the future is to invent it, well: here’s my shot.

This is the use case. My mom lives in the Princeton, NJ area, a reasonably typical sweep of American suburbia that’s almost entirely predicated on automobility. Somewhere between two and five times a month, though, on a not-always-predictable basis, she has to drive to the nearest New Jersey Transit station, Princeton Junction, and there catch the train into New York City. Between the routine congestion of the area, the vagaries of the NJT timetable, and the hassle of finding parking at the station, she’s generally hugely stressed out by having to choose, from among the available options, the routing that will get her to the station in time to find a place to park, buy a ticket, and catch a given train. Our meetings in New York are generally subject to a back-and-forth flurry of last-minute phone calls: which train is she aiming for? What’s the traffic situation on Route 206? How about Route 1? Which train did she actually make? It’s not much fun, on either end, and yet something like this is how a great many people go about suturing their lives together, even in an age when information about most of the particulars here (time, location, traffic, timetable) exists, and is readily accessible from the device she has in her pocket.

Now my mom is not, in the slightest, a stupid woman. She just doesn’t like “technology.” And although she’s comfortable with (even delighted by) the iPhone UI, like a great many people she’s not the kind of person who’s going to switch back and forth between a Google Maps app and a New Jersey Transit app and whatever else she needs to come up with a relevant answer. So not only do I want to give her a single tool that offers her just the information she needs, and nothing else, but I want to give her the power to build that tool herself, so it speaks to her in something approaching her own voice.

What you see in this PDF, therefore, is a schematic representation of a constellation of plug-and-play objects she’d be choosing from and fusing together to make her ad-hoc service. Each of these objects is represented by a graphic icon and each is characterized under the surface by an arbitrary number of attributes and (inherent, dynamic and relational) attribute values. By selecting high-level, self-describing objects relevant to what she wants to do, and then using an enhanced text editor to compose what is effectively a rebus providing operators for these arguments, someone like my mom — with no technical background, or interest in or inclination toward acquiring one — can make herself a highly useful module of functionality, suited to her immediate and particular needs. She could even bundle it into a wrapper and upload it back to the network, either for someone else in nearly-identical circumstances to use as-is, or for others to deconstruct and rebuild according to their own requirements, given objects more relevant to their own local conditions.
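To make the idea a bit more concrete, here’s a minimal sketch, written in plain Python and purely for illustration, of the kind of composition the rebus might resolve to behind the scenes. Every object, attribute and number in it is invented; the point is only that a handful of self-describing objects (a route, a station, a timetable entry) plus a single operator over them is enough to answer the one question my mom actually cares about.

```python
# A minimal sketch of how Momcomp-style composition might work under the hood.
# Everything here is hypothetical: the object names, attributes and data are
# invented for illustration, not drawn from any real service or standard.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Route:
    """A self-describing driving route with a dynamic travel-time attribute."""
    name: str
    drive_minutes: int        # would be fed by live traffic data in a real system

@dataclass
class Train:
    """One departure drawn from a (hypothetical) timetable feed."""
    departs: datetime
    destination: str

@dataclass
class Station:
    """A place object bundling its own buffer costs (parking, ticketing)."""
    name: str
    park_and_ticket_minutes: int

def catchable_trains(now, routes, station, trains):
    """The 'rebus' my mom would compose: for each route, which train can she make?"""
    plans = []
    for route in routes:
        arrival = now + timedelta(minutes=route.drive_minutes
                                  + station.park_and_ticket_minutes)
        feasible = [t for t in trains if t.departs >= arrival]
        if feasible:
            plans.append((route.name, min(feasible, key=lambda t: t.departs)))
    return plans

# Illustrative, made-up values standing in for live feeds:
now = datetime(2010, 7, 20, 9, 0)
routes = [Route("Route 1", 35), Route("Route 206", 50)]
station = Station("Princeton Junction", park_and_ticket_minutes=15)
trains = [Train(datetime(2010, 7, 20, 9, 42), "New York Penn"),
          Train(datetime(2010, 7, 20, 10, 8), "New York Penn")]

for route_name, train in catchable_trains(now, routes, station, trains):
    print(f"Via {route_name}: make the {train.departs:%H:%M} to {train.destination}")
```

In the toolkit I’m imagining, of course, she’d never see anything like this; she’d see the Lego-like icons and the rebus, and the system would keep the attribute values (traffic, timetable, parking) current on her behalf.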

Is it “an app”? No, not really. It’s something more, and less. It’s “just” a natural-language, textual interface layer to some reasonably complicated multivariate calculations running in the background. And in this telling of things, anyway, she built this layer herself, from available modular components fused together in an exceedingly lightweight, “intuitive” development environment. (You, Mike and the baby Jesus will have to forgive me: I’ve represented these components as something resembling Legos.)

Now I’m always a little concerned, when pushing something like this out, that I’m making myself look like that grizzled guy we’ve all encountered, wedged into a booth at All-Nite Donuts, guzzling serial cups of black coffee and scrawling his incoherent Grand Unified Theory of Everything across a stack of sweat-wrinkled legal pads. Nobody is more aware than me that there are holes in this schema you could drive a Northeast Corridor commuter train through. But I think it does a better job than I’ve yet been able to manage on two counts:

- It makes a clearer case than I’ve previously managed for how easy the composition of complex functionality can and ought to be;

- And it lays out in black and white just what geomorphic feats of heavy lifting need to be taken care of in the background before any such vision could come to pass.

The things which I’ve painted as trivial here are admittedly anything but. But they are, I sincerely believe, how we’re going to handle — have to handle — the human interface to this so-called Internet of Things we keep talking about. Each of the networked resources in the world, whether location or service or object or human being, is going to have to be characterized in a consistent, natural, interoperable way, and we’re going to have to offer folks equally high-level environments for process composition using these resources. We’re going to have to devise architectures and frameworks that let ordinary people everywhere interact with all the networked power that is everywhere around them, and do so in a way that doesn’t add to their existing burden of hassle and care.

Momcomp, in other words. It’s an idea whose time I believe has come.

***
I hope you enjoy the PDF I ginned up to illustrate my above contentions. You’re free to take and use and rework it in any way you want and for what purpose you will, just so long as the use is noncommercial and you identify me as the source author. You can find the full terms of the Creative Commons license under which it’s provided to you here.

I’m shutting down threaded comments, by the way; regrettably, this otherwise-lovely theme doesn’t handle them particularly well. This has the particularly irritating consequence of rendering existing threaded discussions all but incoherent, for which I apologize. I’ve written to the theme author to see if there may be a solution. In the meantime, please try to make do. Thanks.

Every user a developer: A brief history, with hopeful branches

Google’s recent announcement of App Inventor is one of those back-to-the-future moments that simultaneously stirs up all kinds of furtive and long-suppressed hope in my heart…and makes me wonder just what the hell has taken so long, and why what we’re being offered is still so partial and wide of the mark.

I should explain. At its simplest, App Inventor does pretty much what it says on the tin. The reason it’s generating so much buzz is that it offers the non-technically inclined, non-coders among us an environment in which we can use simple visual tools to create reasonably robust mobile applications from scratch — in this case, applications for the Android operating system.

In this, it’s another step toward a demystification and user empowerment that had earlier been gestured at by scripting environments like Apple’s Automator and (to a significantly lesser degree) Yahoo! Pipes. But you used those things to perform relatively trivial manipulations on already-defined processes. I don’t want to overstate its power, especially without an Android device of my own to try the results on, but by contrast you use App Inventor to make real, usable, reusable applications, at a time when we understand our personal devices to be little more than a scrim on which such applications run, and there is a robust market for them.

This is a radical thing to want to do, in both senses of that word. In its promise to democratize the creation of interactive functionality, App Inventor speaks to an ambition that has largely lain dormant beneath what are now three or four generations of interactive systems — one, I would argue, that is inscribed in the rhetoric of object-oriented programming itself. If functional units of executable code can be packaged in modular units, those units in turn represented by visual icons, and those icons presented in an environment equipped with drag-and-drop physics and all the other familiar and relatively easy-to-grasp interaction cues provided us by the graphical user interface…then pretty much anybody who can plug one Lego brick into another has what it takes to build a working application. And that application can both be used “at home,” by the developer him- or herself, and released into the wild for others to use, enjoy, deconstruct and learn from.

There’s more to it than that, of course, but that’s the crux of what’s at stake here in schematic. And this is important because, for a very long time now, the corpus of people able to develop functionality, to “program” for a given system, has been dwindling as a percentage of interactive technology’s total userbase. Each successive generation of hardware from the original PC onward has expanded the userbase — sometimes, as with the transition from laptops to network-enabled phones, by an order of magnitude or more.

The result, unseemly to me, is that some five billion people on Earth have by now embraced interactive networked devices as an intimate part of their everyday lives, while the tools and languages necessary to develop software for them have remained arcane, the province of a comparatively tiny community. And the culture that community has in time developed around these tools and languages? Highly arcane — as recondite and unwelcoming, to most of us, as a klatsch of Comp Lit majors mulling phallogocentrism in Derrida and the later works of M.I.A.

A further consequence of this — unlooked-for, perhaps, but no less significant for all of that — is that the community of developers winds up having undue influence over how users conceive of interactive devices, and the kinds of things they might be used for. Alan Kay’s definition of full technical literacy, remember, was the ability to both read and write in a given medium — to create, as well as consume. And by these lights, we’ve been moving further and further away from literacy and the empowerment it so reliably entrains for a very long time now.

So an authoring environment that made creation as easy as consumption — especially one that, like View Source in the first wave of Web browsers, exposed something of how the underlying logical system functioned — would be a tremendous thing. Perhaps naively, I thought we’d get something like this with the original iPhone: a latterday HyperCard, a tool lightweight and graphic and intuitive as the device itself, but sufficiently powerful that you could make real things with it.

Maybe that doesn’t mesh with Apple’s contemporary business model, though, or stance regarding user access to deeper layers of device functionality, or whatever shoddy, paternalistic rationale they’ve cooked up this week to justify their locking iOS against the people who bought and paid for it. And so it’s fallen to Google, of all institutions, to provide us with the radically democratizing thing; the predictable irony, of course, is that in look and feel, the App Inventor composition wizard is so design-hostile, so Google-grade that only the kind of engineer who’s already comfortable with more rigorous development alternatives is likely to find it appealing. The idea is, mostly, right…but the execution is so very wrong.

There’s a deeper issue still, though, which is why I say “mostly right.” Despite applauding any and every measure that democratizes access to development tools, in my heart of hearts I actually think “apps” are a moribund way of looking at things. That the “app economy” is a dead end, and that even offering ordinary people the power to develop real applications is something of a missed opportunity.

Maybe that’s my own wishful thinking: I was infected pretty early on with the late Jef Raskin’s way of thinking about interaction, as explored in his book The Humane Interface and partially instantiated in the Canon Cat. What I took from my reading of Raskin is the notion that chunking up the things we do into hard, modal “applications” — each with a discrete user interface, each (still!) requiring time to load, each presenting us with a new learning curve — is kind of foolish, especially when there is a core set of operations that will be common to virtually everything you want to do with a device. Some of this thinking survives in the form of cross-application commands like Cut, Copy and Paste, but still more of it has seemingly been left by the wayside.

There are ways in which Raskin’s ideas have dated poorly, but in others his principles are as relevant as ever. I personally believe that, if those of us who conceive of and deliver interactive experiences truly want to empower a userbase that is now on the order of billions of people, we need to take a still deeper cut at the problem. We need to climb out of the application paradigm entirely, and figure out a better and more accessible way of representing distributed computational processes and how to get information into and out of them. And we need to do this now, because we can clearly see that those interactive experiences are increasingly taking place across and between devices and platforms — at first for those of us in the developed world, and very soon now, for everyone.

In other words, I believe we need to articulate a way of thinking about interactive functionality and its development that is appropriate to an era in which virtually everyone on the planet spends some portion of their day using networked devices; to a context in which such devices and interfaces are utterly pervasive in the world, and the average person is confronted with a multiplicity of same in the course of a day; and to the cloud architecture that undergirds that context. Given these constraints, neither applications nor “apps” are quite going to cut it.

Accordingly, in my work at Nokia over the last two years, I’ve been arguing (admittedly to no discernible impact) that as a first step toward this we need to tear down the services we offer and recompose them from a kit of common parts, an ecology of free-floating, modular functional components, operators and lightweight user-interface frameworks to bind them together. The next step would then be to offer the entire world access to this kit of parts, so anyone at all might grab a component and reuse it in a context of their own choosing, to develop just the functionality they or their social universe require, recognize and relate to. If done right, then you don’t even need an App Inventor, because the interaction environment itself is the “inventor”: you grab the objects you need, and build what you want from them.
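What might a piece of that kit look like, at the lowest level? Here’s one rough sketch, under interfaces I’ve invented for the purpose (nothing here reflects any actual Nokia framework, or anyone else’s): a component declares what it consumes and produces, and a trivial binding operator wires compatible components into an ad-hoc process.

```python
# A rough sketch of a "kit of common parts": each component declares what it
# consumes and produces, and a binding operator wires compatible components
# into an ad-hoc process. The interfaces and example components are my own
# invention, purely for illustration.
from typing import Callable, Any

class Component:
    def __init__(self, name: str, consumes: str, produces: str,
                 fn: Callable[[Any], Any]):
        self.name, self.consumes, self.produces, self.fn = name, consumes, produces, fn

def bind(*components: Component) -> Callable[[Any], Any]:
    """Compose components into one process, checking that their types line up."""
    for upstream, downstream in zip(components, components[1:]):
        if upstream.produces != downstream.consumes:
            raise TypeError(f"{upstream.name} -> {downstream.name}: "
                            f"{upstream.produces} != {downstream.consumes}")
    def process(value: Any) -> Any:
        for c in components:
            value = c.fn(value)
        return value
    return process

# Hypothetical components anyone might pull from a shared library:
geocode = Component("geocode", "address", "location",
                    lambda a: {"lat": 60.17, "lon": 24.94})
nearby  = Component("nearby stops", "location", "stops",
                    lambda loc: ["Lasipalatsi", "Ylioppilastalo"])
notify  = Component("notify me", "stops", "message",
                    lambda s: f"Nearest stops: {', '.join(s)}")

my_tool = bind(geocode, nearby, notify)
print(my_tool("Mannerheimintie 22, Helsinki"))
```

The substance, again, isn’t the code; it’s the contract: self-describing parts, a dumb binding operator, and the whole thing shallow enough that the interaction environment itself could expose it directly.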

One, two, many Facebooks. Or Photoshops. Or Tripits or SketchUps or Spotifys. All interoperable, all built on a framework of common tools, all producing objects in turn that could be taken up and used by any other process in the weave.

This approach owes something to Ben Cerveny’s seminal talk at the first Design Engaged, though there he was primarily concerned with semantically-tagged data, and how an ecosystem of distributed systems might make use of it. There’s something in it that was first sparked by my appreciation of Jun Rekimoto’s Data Tiles, and it also has some underlying assumptions in common with the rhetoric around “activity streams.” What I ultimately derive from all of these efforts is the thought that we (yes: challenge that “we”) ought to be offering the power of ad-hoc process definition in a way that any one of us can wrap our heads around, which would in turn underwrite the most vibrant, fecund/ating planetary ecosystem of such processes.

In this light, Google’s App Inventor is both a wonderful thing, and a further propping-up of what I’m bound to regard as a stagnating and unhelpful paradigm. I’m both excited to see what people do with it, and more than a little saddened that this is still the conversation we’re having, here in 2010.

There is one further consideration for me here, though, that tends to soften the blow. Not that I’m at all comparing myself to them, in the slightest, but I’m acutely aware of what happens to the Ted Nelsons and Doug Engelbarts of the world. I’ve seen what comes of “visionaries” whose insight into how things ought to be done is just that little bit too far ahead of the curve, how they spend the rest of their careers (or lives) more or less bitterly complaining about how partial and unsatisfactory everything that actually does get built turned out to be. If all that happens is that App Inventor and its eventual, more aesthetically well-crafted progeny do help ordinary people build working tools, firmly within the application paradigm, I’ll be well pleased — well pleased, and no mistake. But in some deeper part of me, I’ll always know that we could have gone deeper still, taken on the greater challenge, and done better by the people who use the things we make.

We still can.

What Apple needs to do now

Update, 10 June 2013: Vindicated!

I’ve been in San Francisco for a day or so, on my way up to O’Reilly’s Foo Camp. This in itself is already happy-making, but when I found myself jetlagged and wide-awake in yesterday’s dawny gloaming and realized where I was (three blocks from the flagship Apple Store) and what day it was (!!), my schedule for the day was foreordained.

I performed quick ablutions, picked up a tall coffee to go, and met free-at-last Tom Coates a little after six in the morning, on what was already a nontrivial line. Lots of free energy drinks, doughnuts, and burritos and eight hours later, I was ushered into The Presence; after the usual provisioning and activation hassles, I left the store with a gorgeous, brand-spankin’-new iPhone 4.

And it truly is gorgeous, y’know? In its formal qualities, this Mk IV represents a significant advance over the last iteration — which I never cared for, as it looked and felt cheap — and a return to Jony Ive’s long-term effort to reinscribe a Ramsian design ethic in the market for 21st century consumer products. As an object, it just about cannot be faulted. Mmmmm.

Oh, but that interface. Or more particularly, the design of applications and utilities. The worrisome signs that first cropped up in the iPhone 3G Compass app, and clouded the otherwise lovely iPad interaction experience, are here in spades. What’s going on here is an unusually false and timid set of choices that, in the aggregate, amounts to nothing less than a renunciation of what these devices are for, how we think of them, and the ways in which they might be used.

I’m talking about the persistent skeuomorphic design cues that mar applications like Calendar, Compass, iBooks and the truly awful Notes. The iPhone and iPad, as I argued on the launch of the original in 2007, are history’s first full-fledged everyware devices — post-PC interface devices of enormous power and grace — and here somebody in Apple’s UX shop has saddled them with the most awful and mawkish and flat-out tacky visual cues. You can credibly accuse Cupertino of any number of sins over the course of the last thirty years, but tackiness has not ordinarily numbered among them.

Dig, however, the page-curl animation (beautifully rendered, but stick-in-the-craw wrong) in iBooks. Feast your eyes on the leatherette Executive Desk Blotter nonsense going on in Notes. Open up Calendar, with its twee spiral-bound conceit, and gaze into the face of Fear. What are these but misguided coddles, patronizing crutches, interactive horseless carriages?

Lookit: a networked, digital, interactive copy of, say, the Tao Te Ching is simultaneously more and less than the one I keep on my shelf. You give up the tangible, phenomenological isness of the book, and in return you’re afforded an extraordinary new range of capabilities. Shouldn’t the interface, y’know, reflect this? A digital book read in Kindle for iPad sure does, as does a text saved to the (wonderful, indispensable) Instapaper Pro.

The same thing, of course, is true of networked, digital, interactive compasses and datebooks and notepads. If anything, the case is even less ambivalent here, because in all of these instances the digital version is all-but-unalloyed in its superiority over the analogue alternative. On the iPad, only Maps seems to have something of the quality of a true network-age cartography viewer.

I want to use the strongest language here. This is a terribly disappointing renunciation of possibility on Apple’s part, a failure to articulate an interface-design vocabulary as “futuristic” as, and harmonious with, the formal vocabulary of the physical devices themselves. One of the deepest principles of interaction design I observe is that, except in special cases, the articulation of a user interface should suggest something of a device, service or application’s capabilities and affordances. This is clearly, thoroughly and intentionally undermined in Apple’s current suite of iOS offerings.

What Apple has to do now is find the visual language that explains the difference between a networked text and a book, a networked calendar entry and a page leaf, or a networked locational fix and a compass heading, and does so for a mass audience of tens or hundreds of millions of non-science-fiction-reading, non-interface-geek human users. The current direction is inexplicable, even cowardly, and the task sketched here is by no means easy. But if anybody can do this, it’s the organization that made generations of otherwise arcane propositions comprehensible to ordinary people, that got out far enough ahead of the technology that their offerings Just Worked.

Application interfaces as effortlessly twenty-minutes-into-the-future as every other aspect of the iPad experience? Now that truly would be revolutionary and magical. I don’t think it’s too much to ask for, or to expect.

Join us in Helsinki on May 22nd for a Touchscapes workshop (updated)

Just in case folks here in town are feeling neglected, fear not: we’re doing events here as well.

As part of Helsinki’s World Design Capital 2010 Ideas Forum, and in collaboration with our good friends at Nordkapp, I’m delighted to announce a workshop called “Touchscapes: Toward the next urban ecology.”

Touchscapes is inspired, in large part, by our frustration with the Symbicon/ClearChannel screens currently deployed around Helsinki, how little is being done with them, and how far short of their potential they’ve fallen. Our sense is that we are now surrounded by screens as we move through the city — personal devices, shared interactive surfaces, and now even building-sized displays — and if thinking about how to design for each of these things individually was hard enough, virtually nobody has given much thought to how they function together, as a coherent informational ecosystem.

Until now, that is, because that’s just what we aim to do in the workshop. Join us for a day of activity dedicated to understanding the challenges presented by this swarm of screens, the possibilities they offer for tangible, touch-based interaction, and their implications for the new urban information design. We’ll move back and forth between conceptual thinking and practical doing, developing solid ideas about making the most meaningful use of these emerging resources culturally, commercially, personally and socially.

Attendance is free, but spaces in the workshop are limited, so I recommend you sign up at Nordkapp on the Facebook event page as soon as you possibly can. See you on the 22nd!

How to bring a Systems/Layers walkshop to your town

Crossposted with Do projects.

The response to the Systems/Layers walkshop we held in Wellington a few months back was tremendously gratifying, and given how much people seem to have gotten out of it we’ve been determined to set up similar events, in cities around the planet, ever since. (Previously on Do, and see participant CJ Wells’s writeup here.)

We’re fairly far along with plans to bring Systems/Layers to Barcelona in June (thanks Chris and Enric!), have just started getting into how we might do it in Taipei (thanks Sophie and TH!), and understand from e-mail inquiries that there’s interest in walkshops in Vancouver and Toronto as well. This is, of course, wonderfully exciting to us, and we’re hoping to learn as much from each of these as we did from Wellington.

What we’ve discovered is that the initial planning stages are significantly smoother if potential sponsors and other partners understand a little bit more about what Systems/Layers is, what it’s for and what people get out of it. The following is a brief summary designed to answer just these questions, and you are more than welcome to use it to raise interest in your part of the world. We’d love to hold walkshops in as many cities as are interested in having them.

What.
Systems/Layers is a half-day “walkshop,” held in two parts. The first portion of the activity is dedicated to a slow and considered walk through a reasonably dense and built-up section of the city at hand. What we’re looking for are appearances of the networked digital in the physical, and vice versa: apertures through which the things that happen in the real world drive the “network weather,” and contexts in which that weather affects what people see, confront and are able to do.

Participants are asked to pay particular attention to:

- Places where information is being collected by the network.
- Places where networked information is being displayed.
- Places where networked information is being acted upon, either by people directly, or by physical systems that affect the choices people have available to them.

You’ll want to bring seasonally-appropriate clothing, good comfortable shoes, and a camera. We’ll provide maps of “the box,” the area through which we’ll be walking.

This portion of the day will take around 90 minutes, after which we gather in a convenient “command post” to map, review and discuss the things we’ve encountered. We allot an hour for this, but since we’re inclined to choose a command post offering reasonably-priced food and drink, discussion can go on as long as participants feel like hanging out.

Who.
Do projects’ Nurri Kim and Adam Greenfield plan and run the workshop, with the assistance of a qualified local expert/maven/mayor. (In Wellington, Tom Beard did a splendid job of this, for which we remain grateful.)

We feel the walkshop works best if it’s limited to roughly 30 participants in total, split into two teams for the walking segment and reunited for the discussion.

How.
In order for us to bring Systems/Layers to your town, we need the sponsorship of a local arts, architecture or urbanist organization — generally, but not necessarily, a non-profit. They’ll cover the cost of our travel and accommodation, and defray these expenses by charging for participation in the walkshop. In turn, we’ll ensure both that the registration fee remains reasonable, and that one or two scholarship places are available for those who absolutely cannot afford to participate otherwise.

If you’re a representative of such an organization, and you’re interested in us putting on a Systems/Layers walkshop in your area, please get in touch. If you’re not, but you still want us to come, you could try to put together enough participants who are willing to register and pay ahead of time, so we could book flights and hotels. But really, we’ve found that the best way to do things is to approach a local gallery, community group or NGO and ask them to sponsor the event.

At least as we have it set up now, you should know that we’re not financially compensated in any way for our organization of these walkshops, beyond having our travel, accommodation and transfer expenses covered.

When.
Our schedule tends to fill up 4-6 months ahead of time, so we’re already talking about events in the (Northern Hemisphere) spring of 2011. And of course, it’s generally cheapest to book flights and hotels well in advance. If you think Systems/Layers would be a good fit for your city, please do get in touch as soon as you possibly can. As we’ve mentioned, we’d be thrilled to work with you, and look forward to hearing from you with genuine anticipation and excitement. Wellington was amazing, Barcelona is shaping up to be pretty special, and Taipei, if we can pull it off, will be awesome. It’d mean a lot to us to add your city to this list. Thanks!

Free mobility, social mobility…transmobility (part III)

This last installment of our series (I, II) on networked mobility is more of a coda than anything else, and it goes directly to the question of systemic cost, and who bears it. (In the interest of full disclosure, I ought to mention that I’ve been having some lovely conversations with Snapper, the company that provides farecard-based payment services to the transit riders of Wellington, and now Auckland as well, and that I have a stake in the success of their endeavor.)

Any time you’re shifting atoms on the scale presented by even a small town’s transit infrastructure, there’s obviously going to be expense involved, and that has to be recovered somehow. Maintaining such a network once you’ve brought it into being? Another recurring expense, on a permanent basis. Rolling stock, of course, doesn’t grow on trees. Training and paying the front- and back-of-house staff — the people who oversee operations, design the signs, drive the trams, clean the stations, even the folks who get to snap on blue latex and haul the belligerent piss-drunks off the buses — another enormous ongoing outlay. Pensions, unplanned overtime, insurance coverage: these things don’t pay for themselves. All stipulated.

So why do I still believe that transit ought to be free to the user?

Because access to good, low- or no-cost public institutions clearly, consistently catalyzes upward social mobility. This was true in my own family — the free CUNY system was my father’s springboard out of the working class — and it continues to be quantifiably true in the context of urban transportation. The returns to society are the things most all of us, across the center of the political spectrum broadly defined, at least claim to want: greater innovation, a healthier and more empowered citizenry, and an enhanced tax base, for starters.

I’m going to make a multi-stage argument, here, first about the optimal economic design of public transit systems, and later about how the emergent networked technologies I’m most familiar with personally might best support the measures and policies I believe to be most sound. Most of what you’re about to read is bog-standard public-policy stuff; only toward the end does it veer toward the kind of Everyware-ish material regular readers of this blog will be comfortable with, and everyone else may find a little odd. Politically, its assumptions ought to be palatable to a reasonably wide swath of people, from social democrats on the center-left to pro-business Republicans on the right; with suitable modifications, anarchosyndicalists shouldn’t find too much that would give them heartburn.

- Let’s start with the unchallenged basics. Access to reliable transportation allows people to physically get to jobs, education and vital services (e.g. childcare) they might not otherwise be able to reach.

- Jobs obviously have a direct effect on household wealth; post-secondary education tends to open up higher-paying employment opportunities, and generates other beneficial second-order effects; and services like reliable childcare allow people to accept (formal and informal) employment with time obligations they would not otherwise be able to accommodate.

- A regional transportation grid sufficiently supple to connect the majority of available jobs with workers rapidly and efficiently is never going to be cheap.

- The return on such an investment is, however, considerable — when savings due to reduced road and highway depreciation, etc., are considered as well as direct benefits, on the order of 2.5:1. This isn’t even remotely in the same galaxy as the kind of multiples that get VCs hot & bothered, but it’s not at all bad for a public-sector expenditure. (Note, too, that the proportion of systemic costs generally retired due to user fees is comparatively small.)

- Being able to spread the fixed costs of a transit system over a significantly expanded ridership would increase the economic efficiency of that system, and thus represent a different kind of savings. Given two types of riders — dependent, people for whom public transit is their only real option, and discretionary, folks who choose public transit over other modes only if it’s markedly cleaner, safer, more convenient, cheaper, etc. — how to maximize both?

- Increasing dependent ridership is relatively easy. I’m going to propose that a greater expansion in the number of transit riders would be achieved by reducing the cost of ridership from relatively-low to zero than by a comparable reduction from relatively-high to relatively- or even absolutely low. Another way of putting it is to say that a significant number of potential riders are dissuaded by the presence of any fare at all; I sketch the shape of this arithmetic, with purely notional numbers, just below. (Strictly speaking, a reduction of fees to zero would be a Pareto-optimal outcome, though this is true only if we agree to consider genuine concerns like increased crowding and greater systemic wear-and-tear from higher loads as externalities. Which, of course, they are not.)

- Maxing out the number of discretionary riders is a little tougher. What both dependent and discretionary riders have in common, though, is the requirement that network apertures be located in as close proximity as is practically achievable to origins and foreseeable destinations. And here’s where the argument arcs back toward the things we’ve been talking about over the last week, because the transmobility system described accommodates just this desire, by forging discrete modal components into coherent journeys. Trip segments dependent on more finely-grained modes like walking, shared bikes or shared cars, primarily at origins and destinations, are designed to dovetail smoothly with the systems responsible for trunk segments, like buses, BRT, light rail, subways, metros and ferries.

The transit system of greatest social and economic value to a region is the one that fuses the greatest number of separate transportation modes and styles into a coherent network; that minimizes friction at interline and intermodal junctures; and that does all of this while presenting a cost to the rider no greater than zero.
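To show the shape of that argument in numbers, and I want to be emphatic that every figure below is notional, invented for illustration rather than drawn from any study or any real system, here is the back-of-the-envelope version:

```python
# A back-of-the-envelope sketch of the argument above. Every number here is
# illustrative, invented to show the shape of the reasoning; none of it is
# drawn from any study or any real transit system.
annual_cost      = 100e6   # hypothetical cost of running the system per year
baseline_trips   = 20e6    # hypothetical annual ridership at today's fare
fare             = 2.00    # hypothetical flat fare
benefit_per_trip = 12.50   # chosen so baseline benefits/cost come to ~2.5:1

baseline_benefit = baseline_trips * benefit_per_trip          # $250M, i.e. 2.5:1
farebox_recovery = baseline_trips * fare / annual_cost        # 40% of cost

# The conjecture: dropping the fare to zero expands ridership substantially.
zero_fare_trips   = baseline_trips * 1.4                      # assume +40% riders
zero_fare_benefit = zero_fare_trips * benefit_per_trip        # $350M, i.e. 3.5:1
extra_subsidy     = baseline_trips * fare                     # forgone fare revenue

print(f"baseline:  {baseline_benefit/annual_cost:.1f}:1 return, "
      f"{farebox_recovery:.0%} of cost recovered at the farebox")
print(f"zero fare: {zero_fare_benefit/annual_cost:.1f}:1 return, "
      f"at the price of ${extra_subsidy/1e6:.0f}M in additional subsidy")
print(f"marginal benefit of going fare-free: "
      f"${(zero_fare_benefit - baseline_benefit)/1e6:.0f}M "
      f"vs ${extra_subsidy/1e6:.0f}M in new subsidy")
```

Whether the ridership bump from going fare-free is actually large enough to carry the day is exactly the empirical question I’d love to see data on.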

Fully subsidizing any such system would be expensive…inarguably so, immoderately so. But if my conjecture is right — and oh, how I would love to see data addressing the question, one way or the other — a total subsidy produces disproportionate benefits even as compared to a generous subsidy. Success on this count would be the ultimate refutation of the zero-sum governance philosophy that took hold in the outsourcin’, rightsizin’ States during the 1990s, and has more recently and unaccountably migrated elsewhere. (I say “unaccountably” because you’d think people would have learned from America’s experience with what happens when you leave things in the hands of a “CEO President.” And also because, well, there hasn’t turned out to be much in the way of accountability for all of that, has there?) Municipalities ought to be conceiving of transit fees not as a potential revenue stream, but as a brake on a much bigger and more productive system.

To me, this isn’t a fantasy, but rather a matter of attending to the demands of basic social justice. For all too many, bad transport provisioning means getting fired because they couldn’t get to work on time, despite leaving the house at zero-dark-thirty. Or not getting hired in the first place, because they showed up late to the interview. Or not being able to take a job once offered, because the added expense of an extra bus trip to put the baby in daycare would burn every last cent one might otherwise eke out of a minimum-wage gig. Anyone who’s ever been trapped by circumstances like these intimately understands cascading failure in the for-want-of-a-nail mode. (Not buying it? See if you can’t dig up a copy of Barbara Ehrenreich’s seminal Nickel and Dimed.)

I’ve recently and persuasively seen privilege defined — and thanks, Mike, for digging up the link — as when one’s “social and economic networks tend to facilitate goals, rather than block them.” As I sit here right now, my mobility options are as infinitely finely grained as present-day practices and technologies can get them: which is to say that my transportation network, too, facilitates the accomplishment of whatever goal I devise for it, whether that means getting to the emergency room, my job, the SUNN O))) gig, the park or the airport. What I’ve here called “transmobility” is an opportunity to use our best available tools and insights to extend that privilege until it becomes nothing of the sort.

Finally: How do I expect my friends at Snapper to make any money, if everything I imagine above comes to pass? Even stipulating that cost to user is zero, there are multiple foreseeable transmobility models where a farecard is necessary to secure access and to string experiences together, before even considering the wide variety of non-fare-based business use cases. And anyway, my job is to help people anticipate and prepare for emerging opportunity spaces, not to artificially preserve the problem to which they are currently the best solution.

OK, I’ve gone all SUPERTRAIN on you for umpty-two-hundred words now; I need a break, and I’m sure you do too. I fully expect, though, that two or maybe even three of you will have plowed all the way to the bottom of this, and are even now preparing to launch the salvos of your corrective discipline, in an attempt to redress faulty assumptions, inflated claims & other such lacunae in my argumentation as you may stumble over. Trust me when I say that all such salvos will be welcome.

Transmobility, part II

Part II of our exploration of transmobility. I want to caution you, again, that this is very much a probe.

Perhaps it’s best to start by backing up a few steps and explaining a little better what I’m trying to do here. What I’m arguing is that the simple act of getting around the city is in the process of changing — as how could it not, when both paths themselves and the vehicles that travel them are becoming endowed with the power to sense and adapt?

Accordingly, I believe we need to conceive of a networked mobility, a transmobility: one that inherently encompasses different modes, that conceptualizes urban space as a field to be traversed and provides for the maximum number of pathways through that field, that gathers up and utilizes whatever resources are available, and that delivers this potential to people in terms they understand.

Yesterday, I posed the question of how we might devise a transmobility that met all of these conditions, while at the same time acknowledging two additional, all-but-contradictory desiderata. These were the desire, on the one hand, to smooth out our interactions with transit infrastructure until vehicular transportation becomes as natural as putting one foot in front of another, and on the other to fracture journeys along their length such that any arbitrary point can become a node of experience and appreciation in and of itself. Any system capable of meeting these objectives would clearly present us with a limit case…but then, I believe that limits are there to be approached.

Finally, I’m addressing all of these questions from a relatively unusual disciplinary perspective, which is that of the service, interaction or experience designer. The downside of this is that I’m all but certainly disinterring matters a professional transit planner or mobility designer would regard as settled questions, while missing the terms of art or clever hacks they would call upon as second nature. But there’s a significant upside, too, which is that I’m natively conversant with the interactive systems that will increasingly condition any discussion of mobility, both respectful of their power and professionally wary of the representations of reality that reach us through them.

So petrified, the landscape grows

In addressing the questions I posed yesterday, then, I’m inclined to start by holding up for examination some of the ways in which trips, routes and journeys are currently represented by networked artifacts. Maybe there’s something that can be gleaned from these practices, whether as useful insights or musts-to-avoid.

I would start by suggesting that the proper unit of analysis for any consideration of movement through urban space has to be the whole journey. This means grasping the seemingly obvious fact that from the user’s perspective, all movement from origin to destination comprises a single, coherent journey, no matter how many times a change from mode to mode is required.

I say “seemingly obvious,” because the interactive artifacts I’m familiar with generally haven’t represented circumstances this way.

Take a simple example: a trip that involves walking to the nearest bus stop, riding the bus downtown, and finally walking from the point you alight from the bus to your ultimate destination. Some of the more supple route-planning applications already capture this kind of utterly normal experience — HopStop, for example, is quite good, at least in New York City — but you’d be surprised how many still do not. To date, they’ve tended to treat journeys in terms solely of their discrete component segments: an in-car GPS system plots automotive routes, a transit route-planner provides for trips from station to station, and so on.

But people think about movements through the city in terms that are simultaneously more personal and more holistic. We think of getting to work, stopping off to pick up a few things for dinner on the way home, or heading crosstown to meet friends for drinks.

So contemporary representations already seem well-suited to one of our criteria, in that the seams between methods of getting around are stark and clear, and perhaps even stark and clear enough to imply the self-directed moments of experience that attend a journey on either side. As far as a GPS display is generally concerned, what happens in the car stays in the car, and what happens next is up to you.

Certainly as compared to some overweening, totalizing system that aimed at doing everything and wound up doing none of it well, there’s something refreshing about this humility of ambition. On the other hand, though, such systems manifestly do not lend themselves well to depicting an important variety of end-to-end trips through the city, which are those trips that involve one or more changes of conveyance.

Think back to our rudimentary example, above. It would be useful if, for the portion of the journey on which you take the bus, that bus “understood” that it was essentially functioning as a connector, a linkage between one segment traversed on foot and another.

And this is still truer of journeys involving intermodal junctures where both traffic and the systemic requirements of timetables and schedules permit you less freedom in planning than walking or cycling might. Such journey plans need to be adjusted on the fly, drawing in data from other sources to accurately account for unfolding events as they happen, with signaling carried through to the infrastructure itself so that some delay, misrouting or rupture in the original plan results in the traveler being offered a panoply of appropriate alternatives.

What if, instead of living with the vehicle, the representational system lived with the traveler, and could move with them across and between modes? On this count, we’re obviously most of the way there already: with turn-by-turn directions provided by Google Maps, the iPhone and its Android-equipped competitors spell howling doom for the single-purpose devices offered by Garmin and TomTom. The emergence of truly ambient approaches to informatic provisioning would guarantee that a traveler never lacked for situational awareness, whether or not they had access to personal devices at any given moment.

What if we could provide these systems with enough local intelligence to “know” that a specified endpoint offers n possibilities for onward travel? What if this intelligence was informed by a city’s mesh of active public objects, so that travel times and schedules and real-time conditions could all be taken into account? And finally, instead of presenting journey segments as self-contained, what if we treated them as if they enjoyed magnet physics?

Then, should you want (or be forced by exigencies beyond your control) to alter your travel plans, you could snap out the mode you’re currently using, and swap in another that met whatever bounding constraints you specified, whether those had to do with speed or accessibility or privacy or shelter from the weather. The RATP’s head of Prospective and Innovative Design, Georges Amar, speaks of enabling transmodality, and this is just what we begin to approach here.
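A minimal sketch of what that magnet physics might look like as a data structure, with everything in it (the modes, the stop names, the timings, the selection logic) invented purely for illustration, would represent a journey as a chain of segments, and the swap as a constrained substitution over one link in the chain:

```python
# A minimal sketch of the "magnet physics" idea: a journey as a chain of
# segments, any of which can be snapped out and replaced by another mode that
# still satisfies the traveler's constraints. All modes, numbers and names
# here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Segment:
    mode: str
    origin: str
    destination: str
    minutes: int
    sheltered: bool = False   # one example of a bounding constraint

@dataclass
class Journey:
    segments: list

    def total_minutes(self):
        return sum(s.minutes for s in self.segments)

    def swap(self, index, alternatives, require_sheltered=False, max_minutes=None):
        """Snap out segment `index` and snap in the quickest alternative that
        still connects the same endpoints and meets the stated constraints."""
        old = self.segments[index]
        candidates = [a for a in alternatives
                      if a.origin == old.origin and a.destination == old.destination
                      and (not require_sheltered or a.sheltered)
                      and (max_minutes is None or a.minutes <= max_minutes)]
        if candidates:
            self.segments[index] = min(candidates, key=lambda a: a.minutes)
        return self

journey = Journey([
    Segment("walk", "home", "stop 14", 7),
    Segment("tram 3B", "stop 14", "Lasipalatsi", 18, sheltered=True),
    Segment("walk", "Lasipalatsi", "office", 5),
])

# The tram is delayed and it's raining: swap the middle segment for whatever
# sheltered alternative connects the same two points.
alternatives = [
    Segment("bus 24", "stop 14", "Lasipalatsi", 22, sheltered=True),
    Segment("shared bike", "stop 14", "Lasipalatsi", 12, sheltered=False),
]
journey.swap(1, alternatives, require_sheltered=True)
print([s.mode for s in journey.segments], journey.total_minutes(), "min")
```

The interesting work, obviously, isn’t in the substitution itself but in keeping every segment’s attributes current from live feeds, and in surfacing the swap to the traveler in terms they actually think in.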

The distinction I’m trying to capture is essentially the same as that Lucy Suchman drew between global, a priori plans on the one hand and situated actions on the other. The result would be a more responsive journey-planning system that, given any set of coordinates in space and time, is capable of popping its head up, having a look around and helping you determine what your best options are.

Moments in modal culture

This isn’t to say that we don’t also conceive of mobility in terms of particular modes of travel, and all the allegiances and affinities they give rise to. As Ivan Illich put it, “Tell me how fast you go, and I’ll tell you who you are.”

It’s not simply the coarser distinctions that tell, either. These shades of meaning and interpretation are crucial even among and between people who share a mode of transport: a fixie rider self-evidently has a different conception of the human-bicycle mesh than a Brompton fan does, while New Yorkers will know perfectly well what I mean if I distinguish two friends by describing them respectively as a 6 train rider and a 7 type. (Though not directly analogous, you can summon up similar images by evoking the L Taraval versus the J Church, the Yamanote-sen against the Hibiya-sen, or the 73 bus against the 15.)

Those of us who ride public transit form personal connections with our stops, our stations and even with particular linkages between lines, and I can only imagine that both our cities and our lives would be impoverished if we gave that up. But there’s no particular reason we need to; all I’m suggesting here is that the total journey needs to be represented as such by all the networked systems traversed in the course of a given outing.

Neither, in devising our transmobility system, can we afford to neglect the specificities and particularities of the component systems that furnish us with its articulated linkages. If one train line isn’t interchangeable with another in the hearts and minds of their riders, the same is true of other kinds of frameworks.

For example, we can’t merely plug some abstract shared bicycle service into the mesh of modal enablers and call it a day. Consider the differing fates of two apparently similar bike-share networks, the Parisian Vélib and Barcelona’s Bicing. In their diverging histories, we can see how differences in business model wind up percolating upward to impact level of service. By limiting the right to use Bicing to residents, by requiring that users open accounts, and having those accounts tied in to the usual variety of identification data, the system provides would-be bad actors with a strong disincentive. You’re personally liable, accountable…responsible.

There are real and problematic downsides to this approach, but the difference this set of decisions makes on the street is immediate and obvious. A rank of Vélib bikes, even in a posh neighborhood, looks like a bicyclical charnelhouse, with maybe three or four out of every five saddles reversed, in what has become Parisians’ folk indicator to one another that a particular bike is damaged to the point that it’s unavailable for use. The Bicing installations that I saw, including ones seeing very heavy use in core commercial districts, aren’t nearly as degraded.

This goes to the point I was trying to make, earlier, by contrasting the older conception of a vehicle as an object to the emergent way of understanding it as a service. Even though they may be physically identical — may draw current from the same grid, may be housed in the same lot, may present the driver with the selfsame control interface — a ZipCar Prius doesn’t function in just exactly the same way as a City CarShare Prius does. You could design a transmobility system so it accounted for either or (preferably) both…but not interchangeably.

Smooth sailing

Again, though I want to enable smooth transitions, I’m not arguing for perfect seamlessness in transit, or anything like it. Kevin Lynch reminds us, in The Image of the City, that “[a]ny breaks in transportation — nodes, decision points — are places of intensified perception.” We ought to welcome some of this heightened awareness, as a counterpoint to the automaticity that can all too easily accompany the rhythms of transit ridership, especially when experienced on a daily or twice-daily basis. On the other hand, it’s true that some of this “intensified perception” is almost certainly down to the anxiety that attends any such decision under circumstances of time pressure, human density and the urgent necessity to perform modal transitions correctly — and this is the fraction I’d argue we’d be better off designing out of transmobility.

At most, I mean for transmobility systems to bolster, not replace, human intuition. Where alternative modes or routings exist, we’re already generally pretty good at using them tactically to optimize against one or another criterion. Sometimes you know the subway’s the only way you can possibly beat the gridlock and get to your appointment on time; other times you choose a taxi instead, because you need to arrive at a meeting looking fresh and composed. One day you have the time to take the bus and daydream your way downtown, and the next it doesn’t get you nearly close enough to where you need to be.

You know this, I know this. So if we’re going to propose any technical intervention at all, it had better be something that builds on our native nous for the city, not overwrites it with autistic AI.

And before we can even begin to speak credibly of integrated mobility services, we’d need to see existing systems display some awareness of the plenitude of alternatives travelers have available to them, some understanding of all the different real-time factors likely to influence journey planning.

To take the most basic example, journey planning for walkers requires a different kind of thinking about the city than, particularly, turn-by-turn directions for drivers. This isn’t simply for the obvious reasons, like car-centric routings that represent a neighborhood as an impenetrable thicket, a maze of one-way streets all alike, that a walker would stroll on through placidly and unconcernedly.

It’s because, as thinkers from Reyner Banham to Jane Jacobs and Kevin Lynch to Ivan Illich have reminded us — and as anyone who’s ever ridden in a car already understands quite perfectly well — velocity is something like destiny. You simply attend to different cues as a walker than you do as a driver, you notice textures of a different gauge, different things matter. And of course the same thing is true for cyclists vis à vis both walkers and drivers.

Over the past eighteen months, I’ve finally seen some first sentinel signs of this recognition trickle into consumer-grade interactive systems, but we’ve still got a long, long way to go.

Musique concrète

A final step would be to design the built environment itself, the ground against which all journeys transpire, to accommodate transmobility. Why wouldn’t you, at least, plan and design buildings, street furniture and other hard infrastructure so they account for the fact of networked mobility services — both in terms of the hardware that underwrites their provision, and of the potential for variability, dynamism, and open-endedness they bring to the built landscape?

In other words: why shouldn’t a bus shelter be designed with a mobile application in mind, and vice versa? Why shouldn’t both be planned so as to take into account the vehicles and embedded sensors connected to the same network? When are we finally going to take this word “network” at face value?

Of course these technologies change — over time they get lighter, more powerful, cheaper. That’s why you design things to be easy-access, easily extensible, as modular as can be: so you can swap out the CAT5 cable and spool in CAT6 (or replace it with a WiMax transponder, or whatever). Nobody’s recommending that we ought to be hard-wiring the precise state of the art as it existed last Tuesday morning into our urban infrastructure. But anyone in a position of power who, going forward, greenlights the development of such infrastructures without ensuring their ability to accommodate networked digital interaction ought to be called to account by constituents at the very next opportunity.

You know I believe that what we used to call “ubiquitous computing” is now, simply, real life. Anybody who cares about cities and the people who live in them can no longer afford to treat pervasively networked informatic systems as a novelty, or even a point of municipal distinction. It’s always hard to estimate and account for, let alone attach precise dollar figures to, missed opportunities, to count the spectral fruits of paths not taken. But given how intimate the relationship between an individual’s ability to get around and a region’s economic competitiveness is known to be, there is no excuse for not pursuing advantage through the adroit use of networked systems to enhance individual and collective mobility.

What we ought to be designing are systems that allow people to compose coherent journeys, working from whatever parameters make most sense to them. We need to be asking ourselves how movement through urban space will express itself (and be experienced by travelers as a cohesive whole) across the various modes, nodes and couplings that will necessarily be involved.

The challenge before us remains integrating this tangle of pressures, constraints, opportunities and affordances into coherent user-facing propositions, ones that would offer people smoother, more flexible, more graceful and more finely-grained control over their movements through urban space. Then we could, perhaps, begin to speak of a true transmobility.

Transmobility, part I

This is a quickish post on a big and important topic, so I’d caution you against taking any of the following too terribly seriously. Blogging is generally how I best think things through, though, so I’d be grateful if you’d bear with me as I work out just what it is I mean to say.

In the Elements talk I’ve been giving for the past year or so, I make a series of concatenated assertions about the near-future evolution of urban mobility in the presence of networked informatics. What I see happening is that as the prominence in our lives of vehicles as objects is for most of us eclipsed by an understanding of them as networked services, as the necessity of vehicular ownership as a way to guarantee access yields to on-demand use, our whole conception of modal transportation will tend to soften into a more general field condition I think of as transmobility.

As I imagine it, transmobility would offer us a quality of lightness and effortlessness that’s manifestly missing from most contemporary urban journeys, without sacrificing opportunities for serendipity, unpressured exploration or the simple enjoyment of journey-as-destination. You’d be freer to focus on the things you actually wanted to spend your time, energy and attention on, in other words, while concerns about the constraints of particular modes of travel would tend to drop away.

When I think of how best to evoke these qualities in less abstract terms, two memories come to mind: a simple coincidence in timing I noticed here in Helsinki not two weeks ago, and a more richly braided interaction I watched unfold over a slightly longer interval during a trip to Barcelona last year.

The first was something that happened as I was saying goodbye to a friend after meeting up for an afterwork beer the other day. It was really just a nicely giftwrapped version of something I’m sure happens ten thousand times a day, in cities across the planet: we shook hands and went our separate ways at the precise moment a tram glided to a stop in front of the bar, and I had to laugh as he stepped onto it without missing a beat and was borne smoothly away.

A whole lot of factors in space and time needed to come into momentary alignment for this to happen, from the dwell time and low step-up height of the tram itself to the rudimentary physical denotation of the tram stop and the precise angle at which the bar’s doorway confronted the street. Admittedly, service and interaction designers will generally only be able to speak to some of these issues. But what if we could design mobility systems, and our interfaces to them, to afford more sequences like this, more of the time?

The second image I keep in mind speaks more to the opportunities presented by travel through a densely-textured urban fabric, and how we might imagine a transmobility that allowed us to grasp more of them.

This time, I was lucky enough to capture the moment in a snapshot: the woman on the bicycle casually rode up to the doorway, casually engaged a friend in conversation, casually kissed her on the cheek and casually pedaled away. The entire interaction, from start to end, may have taken two minutes, and the whole encounter was wrapped with an ineffable quality of grace, as if we’d stumbled across some Gibsonian team of stealth imagineers framing a high-gloss advertisement for the Mediterranean lifestyle.

Again, the quality I so admired was enabled by the subtle synchromesh of many specific and otherwise unrelated design decisions: decisions about the width of the street and its edge condition, about the placement of the doorway and the size of the bike wheels. But it also had a great deal to do with the inherent strengths of the individual bicycle as a mode of conveyance, strengths shared with skateboards, scooters and one’s own feet — among them that the rider has a relatively fine degree of control over micro-positioning and -routing, and that she alone decides when to punctuate a trip with stops and starts.

Watching what happened spontaneously when people were afforded this degree of flexibility made it clear to me that this, too, was a quality you’d want to capture in any prospective urban mobility system. And that to whatever extent we possibly could, we ought to be conceiving of such systems so they would afford their users just such moments of grace.

So on the one hand, we have just-in-time provisioning of mobility, via whatever mode happens to be closest at hand (or is otherwise most congenial, given the demands of the moment). On the other, a sense that any given journey can be unfolded fractally, unlocking an infinitude of potential experiences strung along its length like pearls. It’s not hard to see that these desires produce, at the very least, a strong tension between them, and that we’ll have to be particularly artful in providing for both simultaneously.

How might we balance all of these contradictory demands, in designing networked mobility systems that represent urban space and the challenge of getting through it in terms human beings can relate to? This question brings us to something we’ve discussed here before — the classically Weiserian notion of “beautiful seams” — and it’s a topic we’ll take up in Part II of our series on transmobility.

Neopanoptical

We’re all familiar with the Panopticon, right? The notional prison devised by the eighteenth-century English utilitarian Jeremy Bentham?

No? OK, let me gloss it for you, and people for whom this is a familiar story will forgive me and, I’m sure, point out my mistakes of fact, emphasis or interpretation.

Bentham imagined a prison built in the form of a gigantic ring, with cells by their hundreds disposed around its inner wall. In the very middle of the structure’s central void stood the prison’s sole watchtower, atop which he placed a guard shack with 360-degree visibility.

How to maintain control over the prisoners with but a single tower and a relatively small cadre of guards? For all its formal ingenuity, Bentham’s real innovation was this: the cells lining the periphery were to be brightly illuminated at all times, while the guard tower itself was never lit. The guards were therefore free to observe activity in any cell, at any moment…while the contrast between their brightly-lit cells and the watchtower’s mute windows meant prisoners could never be certain if the guards were observing them, someone else or no one at all. (In principle, the prison administration could go a step further and achieve the same docilizing results without even staffing the tower. How would the inmates even know? After all, they were, and would remain, literally in the dark.)

And there was one final visibility-related wrinkle. The prison would be sited on a hill just outside of town, always there as a vivid reminder that any trespass of the social order would come at a price.

Bentham called his device the Panopticon, and the twentieth-century philosopher of power Michel Foucault famously used it as a jumping-off point for his own dissection of the ways surveillance, visibility and discipline work in contemporary society. One of Foucault’s arguments was that we come to internalize this disciplinary gaze, and that over time the internalization becomes an entirely unconscious process: we carry disciplinarity into the ways we move, speak, act and hold our bodies.

We can see this at work on the most literal level in the way we react to the presence of surveillance cameras. An ordinary CCTV camera’s gaze is directional. It sees you, but you see it seeing you. And should you be interested in evading its gaze, you’re free to tailor your actions accordingly.

As Anna Minton notes, though, in last year’s invaluable Ground Control, the simplest possible material intervention — housing the selfsame camera under an opaque polycarbonate dome, costing at the very most a few tens of dollars — achieves precisely the same effect as the innovation Bentham placed at the heart of the Panopticon. Once the mechanism itself is screened by the dome, anything you do in the 360-degree field around it is potentially in its field of vision. You’re no longer quite certain whether you’re actually under surveillance at any given moment — in fact, there needn’t even be a functioning camera under the dome at all — but are in the interests of prudence forced to assume that you are. You’re compelled to internalize the sense that you’re being watched.

Domes are cheaper than cameras, but of course signs are that much cheaper still; I often suspect that the big yellow notice warning me that I’m under CCTV surveillance is unaccompanied by any actual gear to speak of. What could possibly be a more effective deterrent than the watcher that can’t be seen at all?

What’s the harm in all of this neopanopticism? While there have been cases in which this latent apparatus of control has proved decisive in bringing criminals to justice, or at the very least provided us with a few moments of lulzy fun, longer-term statistical analysis paints a different picture. London’s Metropolitan Police admits that CCTV imagery was used in the resolution of less than four out of every hundred crimes. All that watchfulness may be having some effect on behavior, but it sure isn’t buying the public any particular increment of personal safety.

Minton points out that long-cherished civil liberties may not be the only thing being damaged by the presence of CCTV. She compares Britain with CCTV-free Denmark, and from her review of the available data concludes that pervasive surveillance is actually counterproductive. (The conjectured causative mechanism: because people feel that the implicit presence of supervisory authority makes someone else responsible for dealing with crime, they tune out the incidents they witness, or otherwise choose not to intervene.)

In practice, technologies like CCTV surveillance are always exceedingly difficult to weigh in the balance, the more so when technical developments like doming change the envelope of affordances and constraints in which they operate. The complications are redoubled when those of us who are concerned with public space can only wield dry abstractions like “civil liberties” against hot-button appeals and the human reality of victimization. In this light, it’s not unreasonable to argue that some loss of anonymity would be acceptable if it meant the capture and punishment of muggers and rapists and hit-and-run drivers. (I don’t happen to agree, personally, but it’s not an outright ridiculous belief to hold.)

But we should be very clear that that’s the trade-off we’re being offered. Furthermore, proponents of technologies like CCTV should also be conversant with — and forthright about — the potential for mission creep inherent in them. Systems already deployed are turned toward unforeseen uses; frameworks we already recognize (and therefore, we reckon, understand sufficiently well) are endowed with entirely new potential as easily as you’d blow new firmware into your phone or digital camera. And this happens every day: when we were in Wellington, for example, we were told that the surveillance cameras that voters approved to help manage traffic congestion had been repurposed for crime prevention, without a corresponding degree of public consultation.

Run the image stream coming off of them through a facial-recognition algorithm, and you’ve got an entirely different kind of system on your hands, with entirely different potentials and vastly expanded implications. Yet the cameras, domed or otherwise, look no different from one day to the next. How are people supposed to inform themselves, or avail themselves of their existing prerogatives, under such circumstances?
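
To give a sense of just how small that step is, here is a minimal sketch using the open-source OpenCV library: a handful of lines pointing a crude face detector at a camera feed. The stream URL is hypothetical, and detection is only the bluntest first stage of recognition, but nothing about the camera on the pole would look any different.

```python
# A sketch of how small the step is: a crude face detector pointed at a feed.
# The stream URL is hypothetical; OpenCV's bundled Haar cascade stands in for
# whatever recognition pipeline an agency or vendor might actually deploy.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
stream = cv2.VideoCapture("rtsp://camera.example.net/feed")  # hypothetical camera

while True:
    ok, frame = stream.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # From here it is a short step to matching against a watchlist,
        # and nothing about the camera on the pole looks any different.
        print(f"face at ({x}, {y}), {w}x{h} pixels")
```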

And all of this is still confining our discussion to the visual realm! Yet the real relevance of this neopanoptical drift will only become obvious to most of us as more data is gathered passively in public space, through location-aware devices, embedded sensors and machine inference built on them. It’s these developments which will, as I’ve argued elsewhere, “permanently redefin[e] surveillance,” and it’s these that I’m more worried about than any simple plastic dome. If we don’t get a collective handle on what disciplinary observation means for our polities and places now, we’ll be in genuine trouble when that observation gets infinitely more distributed and harder to see.

Frameworks for citizen responsiveness, enhanced: Toward a read/write urbanism

We’ve been talking a little bit about what we might gain if we begin to conceive of cities, for some limited purposes anyway, as software under active development. So far, we’ve largely positioned such tools as a backstop against the inevitable defaults, breakdowns and ruptures that municipal services are heir to: a way to ensure that when failures arise, they’ll get identified as quickly as possible, assessed as to severity, brought to the attention of the relevant agencies, and flagged for follow-up.
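
For concreteness, here is a minimal sketch of that lifecycle as it might be modeled in code. The stage names, categories and agency routing are placeholders of my own, not any city’s actual schema.

```python
# A sketch of the lifecycle described above. Stage names, categories and the
# agency routing table are hypothetical placeholders, not any city's schema.
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    IDENTIFIED = auto()
    ASSESSED = auto()
    ROUTED = auto()
    FLAGGED_FOR_FOLLOWUP = auto()
    CLOSED = auto()

ROUTING = {  # which agency owns which category of failure (invented examples)
    "streetlight": "Department of Transportation",
    "water-main": "Water Utility",
    "bus-shelter": "Transit Authority",
}

@dataclass
class Issue:
    category: str
    description: str
    severity: int = 0
    agency: str = ""
    stage: Stage = Stage.IDENTIFIED

    def triage(self, severity: int) -> None:
        """Assess severity, route to the responsible agency, flag for follow-up."""
        self.severity = severity                                     # assessed
        self.agency = ROUTING.get(self.category, "General Intake")   # routed
        self.stage = Stage.FLAGGED_FOR_FOLLOWUP                      # awaiting follow-up

issue = Issue("water-main", "water pooling at the corner of 5th and Main")
issue.triage(severity=4)
print(issue.agency, issue.stage.name)
```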

And as useful, and even inspiring, as this might be, to my mind it doesn’t go nearly far enough. It’s essentially the lamination together of some entirely conventional systems, provisions and practices — something that already exists in its component pieces, something, as Bruce points out here, that’s “not even impossible.”

But what if we did take a single step further out? What if we imagined that the citizen-responsiveness system we’ve designed lives in a dense mesh of active, communicating public objects? Then the framework we’ve already deployed becomes something very different. To use another metaphor from the world of information technology, it begins to look a whole lot like an operating system for cities.

Provided with that, we can treat the things we encounter in urban environments as system resources, rather than a mute collection of disarticulated buildings, vehicles, sewers and sidewalks. One prospect that seems fairly straightforward is letting these resources report on their own status. Information about failures would propagate not merely to other objects on the network, but to you and me as well, in terms we can relate to, via the provisions we’ve made for issue-tracking.

And because our own human senses are still so much better at spotting emergent situations than their machinic counterparts, and will probably be for quite some time yet to come, there’s no reason to leave this all up to automation. The interface would have to be thoughtfully and carefully designed to account for the inevitable bored teenagers, drunks, and randomly questing fingers of four-year-olds, but what I have in mind is something like, “Tap here to report a problem with this bus shelter.”
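
Something like the following minimal sketch, say, in which the tag on the shelter yields nothing but an identifier, and the handset wraps it together with the human’s note. The endpoint, the identifier scheme and the payload fields are all hypothetical.

```python
# A sketch of the "tap here to report a problem" interaction. The tag on the
# shelter yields only an identifier; the handset adds the human's note. The
# endpoint, identifier scheme and payload fields are all hypothetical.
import json
import time
import urllib.request

REPORTING_ENDPOINT = "https://civics.example.net/reports"  # hypothetical

def build_report(object_id: str, note: str) -> bytes:
    """Wrap the tapped object's identifier and the citizen's note as JSON."""
    return json.dumps({
        "object": object_id,
        "note": note,
        "reported_at": int(time.time()),
    }).encode("utf-8")

def prepare_submission(payload: bytes) -> urllib.request.Request:
    """Build (but, in this sketch, don't actually send) the POST request."""
    return urllib.request.Request(
        REPORTING_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

payload = build_report("bus-shelter:helsinki:0457", "glass panel shattered")
request = prepare_submission(payload)
print(request.full_url)
print(payload.decode("utf-8"))
```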

In order for anything like this scheme to work, public objects would need to have a few core qualities, qualities I’ve often described as making them “addressable, queryable, and even potentially scriptable.” What does this mean? (A minimal sketch pulling all three together follows the list below.)

- Addressability. In order to bring urban environments fully into the networked fold, we would first need to endow each of the discrete things we’ve defined as public objects with its own unique identifier, or address. It’s an ideal application for IPv6, the next-generation Internet Protocol, which I described in Everyware as opening up truly abyssal reaches of address space. Despite the necessity of reserving nigh-endless blocks of potentially valid addresses for housekeeping, IPv6 still offers us a ludicrous freedom in this regard; we could quite literally assign every cobblestone, traffic light and street sign on the planet a few million addresses.

It’s true that this is overkill if all you need is a unique identifier. If all you’re looking to do is specify the east-facing traffic signal at the northeast corner of 34th Street and Lexington Avenue, you can do that right now, with barcodes or RFID tags or what-have-you. You only need to resort to IPv6 addressability if your intention is to turn such objects into active network nodes. But as I’ve argued in other contexts, the cost of doing this is so low that any potential future ROI whatsoever justifies the effort.

- Queryability. Once you’ve got some method of reliably identifying things and distinguishing them from others, a sensitively-designed API allows us to pull information off of them in a meaningful, structured way, either making use of that information ourselves or passing it on to other systems and services.

We’ve so far confined our discussion to things in the public domain, but by defining open interoperability standards (and mandating the creation of a critical mass of compliant objects), the hope is that people will add resources they own and control to the network, too. This would offer incredibly finely-grained, near-realtime reads on the state of a city and the events unfolding there. Not merely, in other words, to report that this restaurant is open, but which seats at which tables are occupied, and for how long this has been the case; not merely where a private vehicle charging station is, but how long the current waits are.

Mark my words: given only the proper tools, and especially a well-designed software development kit, people will build the most incredible ecology of bespoke services on data like this. If you’re impressed by the sudden blossoming of iPhone apps, wait until you see what people come up with when they can query stadium parking lots and weather stations and bike racks and reservoir levels and wait times at the TKTS stand. You get the idea. (Some of these tools already exist: take a look at Pachube, for example.)

- And finally scriptability, by which I mean the ability to push instructions back to connected resources. This is obviously a delicate matter: depending on the object in question, it’s not always going to be appropriate or desirable to offer open scriptability. You probably want to give emergency-services vehicles the ability to override traffic signals, in other words, but not the spotty kid in the riced-out WRX. It’s also undeniable that connecting pieces of critical infrastructure to an open network increases the system’s overall vulnerability — what hackers call its “attack surface” — many, many times. If every exit is an entrance somewhere else, every aperture through which the network speaks itself is also a way in.
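
Here, then, is the sketch promised above: a public object that is addressable (its own IPv6 address), queryable (structured status) and scriptable (gated by role). Every name in it, from the object to the roles, is a hypothetical placeholder.

```python
# A sketch of the three qualities above: addressable, queryable, scriptable.
# Every name here, from the object to the roles, is a hypothetical placeholder.
import ipaddress
from dataclasses import dataclass, field

@dataclass
class PublicObject:
    name: str
    address: ipaddress.IPv6Address              # addressable
    status: dict = field(default_factory=dict)
    authorized_roles: set = field(default_factory=set)

    def query(self) -> dict:                    # queryable
        """Return a structured, machine-readable read on the object's state."""
        return {"name": self.name, "address": str(self.address), **self.status}

    def script(self, command: str, role: str) -> str:   # scriptable, carefully
        """Accept an instruction only from roles this object has chosen to trust."""
        if role not in self.authorized_roles:
            return f"refused: '{role}' may not script {self.name}"
        return f"{self.name}: executing '{command}' for {role}"

# The address space really is abyssal: 2**128 possible addresses.
print(f"total IPv6 addresses: {2 ** 128:.3e}")

signal = PublicObject(
    name="34th & Lexington, east-facing signal",      # the example from above
    address=ipaddress.IPv6Address("2001:db8::34:1"),  # documentation prefix
    status={"lamp": "green", "last_fault": None},
    authorized_roles={"emergency-services"},
)

print(signal.query())
print(signal.script("hold green for 30 seconds", role="emergency-services"))
print(signal.script("hold green for 30 seconds", role="joyrider"))
```

The role check here is, of course, just a stand-in for the kind of authorization machinery a real deployment would have to get right, which is exactly the delicate matter raised above.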

We should all be very clear, right up front, that this is a nontrivial risk. I’ll make it explicit: any such scheme as the one sketched out here presents the specter of warfare by cybersabotage, stealthy infrastructure attrition or subversion, and the depredations of random Saturday-night griefers. It’s also true that connected systems are vulnerable to cascading failures in ways non-coupled systems cannot ever be. Yes, yes and yes. It’s my argument that over anything but the very shortest term, the advantages to be derived from so doing will outweigh the drawbacks and occasional catastrophes — even fatal ones. But as my architect friends say, this is above all something that must be “verified in field,” validated empirically and held up to the most rigorous standards.

What do we get in return for embracing this nontrivial risk? We get a supple, adaptive interface to the urban fabric itself, something that allows us not just to nail down problems, but to identify and exploit opportunities. Armed with that, I can see no upward limit on how creative, vibrant, imaginative and productive twenty-first century urban life can be, even under the horrendous constraints I believe we’re going to face, and are perhaps already beginning to get a taste of.

Stolidly useful, “sustainable,” justifiable on the most gimlet-eyed considerations of ROI, environmental benefit and TCO? Sure. But I think we should be buckling ourselves in, because first and foremost, read/write urbanism is going to be a blast.
