This is admittedly minor, but I find it rather telling. At the moment, I’m doing some research on the so-called “sharing economy” for my book, and in particular am digging into the background of the travesty that ensued when the founders of the Couchsurfing hospitality-exchange network chose to pivot it from something built on purely voluntary participation into a for-profit enterprise.
I hadn’t been to the Couchsurfing website itself for quite a while — as in, the last time I visited, it was a .org. So when I first loaded it this time around, I was looking at it with fresh eyes. And maybe that’s why all of the images on the page that are ostensibly of satisfied Couchsurfers registered so oddly to me. You really can’t help but notice that, for self-submitted pictures of people from all over the world — and, at that, members of a site dedicated to free hospitality exchange — they seem unusually straightforward, consistent and professional in their composition and lighting.
Put more directly, they look like commercial stock photography. And that isn’t what you’d necessarily expect from a platform that theoretically prides itself on the strength and genuineness of the peer-to-peer relationships it enables. A few years ago, I would have had to wonder whether these images did in fact represent happy Couchsurfers; now, of course, we have Google Image Search. It only took me a few seconds’ clicking around to confirm what I had suspected — or actually, something even more troubling.
It’s not merely that these are not at all images of actual Couchsurfers; in itself, that might readily enough be forgiven. It’s that the images appear to have been downloaded, altered and used in a commercial context without their creators’ knowledge or consent — in one case, in fact, in direct contravention of the (very generous) terms of the license under which they were offered.
Here, let’s take a look:
– The image labeled “Jason” is one of photographer David Weir’s 100 Strangers, originally labeled with a copyright notice;
– “Dang” is a crop of commercial photographer Anthony Mongiello’s headshot of actor Stanley Wong;
– “Sonja” and “Gérard” are two of Chris Zerbes’ Stranger Portraits. While Chris does make his photos available under a Creative Commons license, its terms clearly stipulate that any use must not be commercial, that it must be attributed to him, and that no derivatives may be made from the original image. All of those provisions are violated here.
It’s bad enough that Couchsurfing would choose to use stock photography, when imagery of actual site members would tell a much more compelling story. But that they’ve chosen to gank images from hard-working photographers, to do so for commercial gain, without even a gesture at attribution? To me, that says a great deal about just what kind of “sharing” we mean when we talk about a sharing economy.
UPDATED: Couchsurfing has since removed the images in question, without otherwise acknowledging this post or my other attempts to communicate with them. For the record, such as it is, I enclose screenshots of the page as it previously appeared.
If you’ve been reading this blog for any particular length of time, or have tripped across my writing on the Urbanscale site or elsewhere, you’ve probably noticed that I generally insist on discussing the ostensible benefits of urban technology at an unusually granular level. (In fact, I did this just yesterday, in my responses to questions put to me by Korea’s architectural magazine SPACE.) I’ll want to talk about specific locales, devices, instances and deployments, that is, rather than immediately hopping on board with the wide-eyed enthusiasm for generic technical “innovation” in cities that seems near-universal at our moment in history.
My point in doing so is that we can’t fairly assess a value proposition, or understand the precise nature of the trade-offs bound up in a given deployment of technology, until we see what people make of it in the wild, in a specific locale. The canonical example of the perils that attend the overly generic consideration of a technology is bus rapid transit, or BRT, which works very, very well indeed on sociophysical terrain that strongly resembles its original home of Curitiba, and much less so in low-density environments like Johannesburg, or in places where, for whatever reason, access to the right-of-way can’t be controlled, notably Delhi and New York City. BRT was sold to these latter municipalities as a panacea for problems of urban mobility, without reference to all of the spatial, social, regulatory, pricing-model and service-design elements that had to be brought into balance before anything like success could be declared, and it shows. (Boy howdy, does it show. Have you ridden the New York City MTA’s half-assed instantiation of BRT lately?)
And if anything, information technology is even more sensitively dependent on factors like these. The choice of one technology over another (form factor, operating system, service provider, register of language…) very often turns out to determine the success or failure of a given proposition.
But despite all this, sometimes it is possible for the careful observer to suss out the likely future contours of a technology’s adoption, based on a more general appreciation of its nature. And that’s why I want to take a little time today to discuss with you my thinking around the emergent class of low-power, short-range transmitters known as “beacons.”
Classically, of course, a “beacon” was a visually prominent signal of some sort, designed to notify or warn those encountering it of some otherwise indistinct condition or feature in the landscape. And perhaps as originally envisioned, this class of transmitters genuinely was supposed to be what it said on the tin: a simple way for relatively low-powered devices to find and lock onto one another, amid the fog and unpredictable dynamism of the everyday.
This is not a particularly new idea; as long ago as 2005, I’d proposed on my old v-2 site that networked objects would need some lightweight, low-cost way of radiating information about their presence and capabilities to other things (and by extension, people) in the near neighborhood — the foundation of what, at that time, I thought of as a “universal service-discovery layer” draped over the world. And of course I was nowhere near the first to have proposed something along these lines; I myself had been inspired to think more deeply about things talking to each other from a sideways reading of a throw-away bit of cleverness in Bruce Sterling’s 1998 novel Distraction, and it’s fair to say that the idea of things automatically broadcasting their identity to other things had been in the air for quite a few years before that.
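By way of illustration only, the essence of that service-discovery layer can be sketched in a few lines: a thing periodically radiates a small, self-describing record to anyone on the local network who cares to listen. Everything here (the port number, the message format, the device name) is invented for the example; the real-world descendants of the idea are protocols like mDNS and SSDP.

```python
import json
import socket

DISCOVERY_PORT = 50000  # hypothetical port, chosen for this example only

def build_announcement(name, capabilities):
    """Serialize a tiny presence-and-capabilities record."""
    return json.dumps({"name": name, "capabilities": sorted(capabilities)}).encode()

def announce(payload, port=DISCOVERY_PORT):
    """Fire the record at the local broadcast address: one-shot and
    unreliable by design, with listeners caching whatever they hear."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", port))

msg = build_announcement("kettle-01", ["boil-water", "report-temperature"])
print(msg.decode())
```

The point of the sketch is the asymmetry it doesn’t have: the record exists to inform whoever happens to be nearby, not to report back to anyone.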
But in evolving commercial parlance, beacons are nothing of the sort, really. A contemporary beacon (like these ugly and rather hostile-looking blebs, sold by Estimote) is primarily designed to capture information, not to convey it — and such information as it does convey outward is disproportionately intended to benefit the sender over the recipient. So my first objection to beacon technology is that this very framing is in itself mendacious and misleading. (You know you’re in trouble when the very name of something is a lie.)
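It’s worth pausing on just how little these devices actually say. The sketch below parses a synthetic iBeacon-style advertisement; the packet layout (vendor prefix, sixteen-byte identifier, two short integers, a calibrated power value) follows Apple’s published format, but the UUID and numbers are invented, and the parsing is deliberately simplified. Note that nothing in the payload means anything to you, the person being ranged: the identifier resolves only against the deploying vendor’s own database.

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse iBeacon-style manufacturer data from a BLE advertisement.

    Returns (uuid, major, minor, tx_power) or None if the data doesn't
    match. The payload is nothing but an opaque identifier plus a power
    calibration value; all meaning lives in the vendor's backend.
    """
    # 0x004C = Apple company ID (little-endian on the wire),
    # 0x02 = iBeacon type, 0x15 = 21 bytes of payload to follow.
    if len(mfg_data) < 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
        return None
    beacon_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    (tx_power,) = struct.unpack(">b", mfg_data[24:25])
    return beacon_uuid, major, minor, tx_power

# A synthetic packet: made-up UUID, major=1, minor=42, calibrated -59 dBm.
pkt = (b"\x4c\x00\x02\x15"
       + uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0").bytes
       + struct.pack(">HHb", 1, 42, -59))
print(parse_ibeacon(pkt))
```

Twenty-five bytes, and not one of them addressed to the passerby.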
As things stand now, beacons are intended for one purpose, and one purpose alone: to capture and monetize your behavior. As with the so-called Internet of Things more broadly, there simply aren’t any particularly convincing or compelling use cases for the technology that aren’t about driving needless consumption; almost without exception, those that are even partially robust have to do with closing a commercial transaction. Both the language of beacon technology and the framework of assumptions it grows out of are airlessly, claustrophobically hegemonic, and this thinking is all over their sites: vendors urge you to deploy these “media-rich banner ads for the physical world” in “any physical place, such as your retail store,” to “drive engagement,” “cross-sell and up-sell” and eventually “convert” passersby to purchasers. Even beacon advocates have a hard time coming up with any more than half-hearted art projects by way of uses for the technology that are not founded in the desire to relieve some passing mark of the contents of their wallet, reliably, predictably and on an ongoing basis.
And even those scenarios of use which appear at first blush to be founded in blamelessly humanitarian ends, when subjected to trial by ordeal ultimately turn out to embrace the shabbiest neoliberal reasoning. Cheaper to spackle a subway station with networked microlocation transponders, goes the thinking, than to actually hire and train the (unpredictable, and damnably needy) human beings that might help riders navigate the corridors and interchange nodes. Even if the devices don’t actually turn out to work all that reliably in the fullness of time, or impose a starkly higher total cost of ownership than initially estimated, there will be a concrete deployment that someone can point to as an accomplishment, a ticked-off achievement and a justification for renewed budgetary allocation or re-election.
Finally, I find it noteworthy that the beacon cost-benefit proposition can only subsist when it is accomplished stealthily; when it is instead presented to citizens forthrightly and transparently, it is just as forthrightly rejected. Perhaps it’s a temporary blip of post-Snowden reticence, but my sense is that most of us have become chary of bundling too many performative dimensions of our identity onto our converged devices at once, and not at all without reason. (Ultimately, I diagnose similar reasons underneath the failure to date of digital wallets and similar device-based payment solutions to gain any market traction whatsoever, though there are other questions at play there as well.)
Beyond and back
The interest in beacons strikes me as being symptomatic of something deeper and more troubling in the culture of technology, something I think of as “the Engelbart overshoot.”
There was a powerful dream that sustained (and not incidentally, justified) half a century’s inquiry into the possibilities of information technology, from Vannevar Bush to Doug Engelbart straight through to Mark Weiser. This was the dream of augmenting the individual human being with instantaneous access to all knowledge, from wherever in the world he or she happened to be standing at any given moment. As toweringly, preposterously ambitious as that goal seems when stated so baldly, it’s hard to conclude anything but that we actually did achieve that dream some time ago, at least as a robust technical proof of concept.
We achieved that dream, and immediately set about betraying it. We betrayed it by shrouding the knowledge it was founded on in bullshit IP law, and by insisting that every interaction with it be pushed through some set of mostly invidious business logic. We betrayed it by building our otherwise astoundingly liberatory propositions around walled gardens and proprietary standards, by putting the prerogatives of rent-seeking ahead of any move to fertilize and renew the commons, and by tolerating the infestation of our informational ecology with vile, value-destroying parasites. These days technical innovators seem more likely to be lauded for devising new ways to harness and exploit people’s life energy for private gain than for the inverse.
In fact, you and I now draw breath in a post-utopian world — a world where the tide of technical idealism has long receded from its high-water mark, where it’s a matter of course to suggest that we must attach (someone’s) networked sensors to our bodies in order to know them, and where, rather astonishingly, it is possible for an intelligent person to argue that spamming the globe with such devices is somehow a precondition of “reclaim[ing our] environment as a place of sociability and creativity.” And this is the world in which beacons and the cause of advocacy for them arise.
There’s very little meaningful for this technology to do — no specifiable aim or goal that genuinely seems to require its deployment, and that could not be achieved as readily or more so in some other way. As presently constituted, anyway, it doesn’t serve the great dream of aiding us in our lifelong effort to make sense of the endlessly confounding and occasionally dangerous world. It furthers only the puniest and most shaming of ambitions. To the talented, technically capable folks working so hard to build out the beacon world, I ask: Is this really what you want to spend any part of your only life on Earth working to develop? To those advocating this turn, I ask: Can’t you think of any way of relating to people more interesting and productive than trying to sell them something they neither want nor need, and most likely cannot genuinely afford?
It doesn’t take too concerted an intellectual effort to understand what’s really going on with beacons — as a matter of fact, as we’ve seen, most people evidently seem to understand the situation perfectly well already. But I don’t hold out too much hope of getting any of the truly convinced to see the light on this question; we all know how very difficult it can be to get people to understand something when their salary (mortgage payments/kids’ private-school tuition/equity stake/deal flow) depends on them not understanding it. If you ask me, though, we were meant for better things than this.
The subject of this post may be rather obscure, particularly for those of you who are not from the United States, or do not pay attention to American political media. I hope you’ll excuse me, though, because I think it’s important to examine some of the ways that claims on behalf of the corporate use of information technology are normalized and made to seem natural by their treatment in the media.
My concerns here focus on Talking Points Memo, a political blog whose tendency, I think, it would be fair to describe as center-left by US standards (and center-right by those generally obtaining elsewhere). Over the past year or so, under the leadership of site founder and editor Joshua Marshall, TPM has been seeking to broaden its coverage beyond the party-political, with the clear ambition of supplanting brands like the dying Newsweek as a trusted general-news outlet. The site continues to position itself as “the premier digital native political news organization in the United States,” but I’m willing to bet that “political” isn’t destined to remain there forever. This is a site with its eye on the main chance.
Part and parcel of this effort has been a significant expansion into science and technology reportage, both handled by a TPM staffer named Carl Franzen. Ordinarily, I would welcome a political site — especially one as associated with the notion of rigorously-vetted crowdsourced investigative journalism as TPM — taking on the responsibility of covering a topic as salient to our choices in everyday life as emergent technology, but what I’ve seen so far doesn’t begin to measure up to my expectations.
In fact, it’s hard to overstate how disappointed I am with the quality of TPM’s technology coverage. In most articles appearing under Franzen’s byline, you’ll note, the content of a press release or a sympathetic interview is transcribed word for word into the TPM post, lending the site’s imprimatur to whatever claims are being made by the article’s subject. At no time does Franzen appear to challenge what he’s being told, seek any other informed perspective, or simply attempt to validate a proffered representation as factually accurate.
The most recent example of Franzen’s credulity is an almost perfectly ahistorical post accepting Google’s claim that their prototype Field Trip app somehow constitutes an example of “ubiquitous computing”; indeed, the piece comes perilously close to crediting Google with inventing ubiquitous computing in the first place. (And yes, those of you familiar with the ubicomp discourse will not in the slightest be surprised to learn that in among the hype recapitulated by Franzen is the inevitable claim to offer a “seamless” experience.) Note that Franzen allows Google VP John Hanke 163 words: over half the length of his 299-word post.
Here, in a piece entitled “Cooler Than Facebook” — and how the marketing department must have loved that — Franzen makes a pitch on behalf of Google Plus:
In the near future, social networking may involve navigating a stylishly animated Google Plus on your desktop computer while resting comfortably in a chair a few feet away, using your smartphone as a remote control.
What is this but an unchallenged, unexamined and limpidly transparent paraphrase of a Google team’s own description of their demo? It’s practically Eisenhower-era in its depiction of benevolent corporate forces deployed on behalf of your convenience and comfort. (“Resting comfortably in a chair,” you say? Why, Top Men are working on it even as we speak!)
It’s not just Google that gets this treatment. Here Microsoft “bring[s] the ability to accurately scan 3D objects to the masses,” with their “eye-popping, incredibly detailed” Kinect Fusion offering. And here is a selection of other Franzen pieces that read like press releases: for Barnes & Noble, eBay, Tesla…these, mind you, are just from TPM’s technology coverage over the last sixty days.
I think you may be beginning to sense a pattern here, no? From my perspective, though, the most galling example of Franzen’s work is probably this piece on Control Group, which not merely reads like the kind of flackery you find on PR NewsWire, but does so on behalf of some particularly pernicious claims.
It’s not just that Franzen’s gee-whiz tone is annoying, although it does annoy me. It’s the willingness to carry water for an agenda that would certainly be sinister if it had not been so thoroughly debunked over the past twenty years. Consider this unquestioned statement from Control Group CEO Campbell Hyers:
[I]n a corporate environment, you’d be able to swipe your badge and instantly have a conference room itself invite all of the right participants to the meeting and bring up the right slides on a projector screen and then log the whole conference as an audiovisual file later.
A more knowledgeable reporter would have spotted that Hyers’s pitch, far from being futuristic, is actually a string of clichés reaching straight back to Mark Weiser’s 1990s tenure at PARC (and, at that, long problematized). This knowledge is somewhat arcane, of course, and it may not be particularly realistic to expect a cub reporter to have immersed him- or herself in the detailed history of the field being covered. But surely a more diligent reporter might have reached out to known sources of insight in that field, and attempted to vet the essential contours of the story he or she was being told. And that’s without touching the airless, hegemonic notion that conference rooms and employee identity badges and PowerPoint presentations are the natural order of things.
Franzen manages to accept at face value all of the claims made about the company’s putative “operating systems for physical space,” in a way that’s curiously at odds with TPM’s ostensible progressive agenda. (In fairness, the problems with Franzen’s coverage precede his arrival at TPM. Here’s an older, similarly breathless piece he contributed to Atlantic Wire.)
And it’s just that tension — between the latent logic of so many of these pieces and anything we might fairly think of as progressive politics — that prompts me to write this. I don’t pay much attention to the gadget-oriented technology blogs, with their pong of adolescent-male wish fulfillment, and I certainly can’t abide the Valley-centric tech industry coverage of other “technology” sites. But I don’t expect insight or critique from either of these directions — in fact, I’d be foolish to do so. By contrast, I surely do expect it from a site that not only, in every other realm in which it operates, upholds the honorable tradition of investigative journalism, but clearly does so in the name of a particular kind of politics.
I’m not asking that Talking Points Memo transform itself into, say, the New Left Review. But questioning the logic of the arguments that are made before the public, seeking alternative perspectives: these functions are both core to TPM’s mission, and key to the value it represents itself as providing to its audience. Lending its hard-won imprimatur to transparent PR and marketing tripe — on not a few occasions, again, literally word for word — not merely does not establish any new domain of credibility, it undermines whatever reputation for independence and quality the site currently enjoys. Franzen and, by extension, Marshall’s site are getting played. They’re being used. They would resent it, howlingly, from a corrupt Congressman or a racist sheriff, and they ought to resent it every bit as much from corporate flacks and clueless technoutopians.
What’s worse is that, given contemporary habits in media consumption, it is not at all unlikely that Franzen’s is the only coverage of the technology sector TPM’s core audience will be exposed to. TPM’s embrace of his work could all too easily lead otherwise-sophisticated readers to believe that viewpoints like the ones expressed in Carl Franzen’s writing are fully normalized and universally agreed-upon — if not, god forbid, the leftmost marker of acceptable opinion. This is precisely how consensus realities are established, how discourse policing works; if “even the left-leaning Talking Points Memo” endorses a point of view, anyone quibbling with it is by definition outside the bounds of the discursive community, and of fair comment. Like any publisher, in other words, Marshall has some responsibility for anticipating how the color of approval his act of publication lends to things is likely to be used, particularly by those ideologically unsympathetic to his other aims.
The old feminist adage reminds us that “the personal is the political,” and it’s precisely the same here: every technology comes with a conception of our role in the world bundled in it. It’s vital, particularly for those of us who think of ourselves as somehow being “on the left,” or in any way working toward a progressive agenda, that we ask how technologies can serve ends inimical to whatever goals we believe are worth the effort. And it’s unquestionably the prerogative of a would-be independent news outlet to apply to ostensibly innovatory products and services some standard of evaluation deeper than whether or not they are “cool.”
My bottom line is that I find the tone, tenor and, most importantly, the content of Franzen’s coverage sharply at odds with the progressive tradition I interpret Talking Points Memo as trying to uphold. I recognize some of the shortfalls in his work as the clear consequence of the intense pressure on an online outlet to publish, on an online writer to make word count. But that pressure doesn’t justify outright stenography. If Talking Points Memo is not willing or able to bring the exact same level of discernment, skepticism and professionalism to their technology coverage that Marshall would demand of any political coverage appearing under the site’s name, perhaps they ought to consider stepping back from the ambition of offering that coverage.
So of course Russell’s spot-on here, about the terrible things that await us as poorly-considered game-like logics are superimposed over everyday life. He never comes right out and says it, but I assume he’s reacting to Jesse Schell’s recent epiphany about networked life, gaming tropes and the motivational mechanics they afford when brought together, and maybe the recent popularity of Foursquare, with its badges and mayorships.
Schell’s argument (or one of them, anyway) is that the everyday environment is now sufficiently instrumented and internetworked that the psychological triggers and incentives developed by game designers to motivate in-game behavior can be deployed in real life. A poster on MetaFilter puts it in a nutshell: “points for brushing your teeth, doing your homework, eating your cornflakes. Gain levels for riding the bus instead of driving. Net-integrated sensors in every device to keep track of your score and upload them to Facebook or wherever. Tax incentives if you get a good enough score on your kid’s report card or read the right books.”
And this is more than passing scary, because these motivators work. Just as food designers have figured out how to short-circuit our wetware with precisely calibrated doses of fat, salt and sugar, game developers trip the dopamine trigger with internally-consistent, but generally otherwise worthless, symbolic reward systems. That they’ve (knowingly or otherwise) learned how to play this primordial pathway like a piano is attested to by the untold gigahours gamers worldwide spend voluntarily looping out the most arbitrary actions, when most of them presumably have a choice of other pretty swell things they could be doing. Like, y’know, their partners.
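To make those mechanics concrete, here is a deliberately minimal sketch of such a reward loop. Every number in it is invented for illustration; the point is the shape of the curve: cheap early levels to set the hook, exponentially pricier ones to sustain the grind.

```python
class RewardLoop:
    """A toy points-and-levels engine of the kind described above."""

    def __init__(self):
        self.points = 0
        self.level = 0

    def _threshold(self, level):
        # Each level costs roughly twice the last: quick early wins,
        # ever-longer grinds later -- the classic engagement curve.
        return 100 * (2 ** level)

    def award(self, action_points):
        """Credit points for some tracked behavior; return any levels
        gained, i.e. the moments the dopamine trigger fires."""
        self.points += action_points
        gained = []
        while self.points >= self._threshold(self.level):
            self.points -= self._threshold(self.level)
            self.level += 1
            gained.append(self.level)
        return gained

loop = RewardLoop()
print(loop.award(150))  # crosses the 100-point threshold: level 1
print(loop.award(150))  # 50 + 150 = 200 meets the next threshold: level 2
```

Trivial as it is, this is more or less the whole trick: what gets tuned is the schedule, not the activity.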
What happens when incentive mechanics like this leak out of gamespace and into the world? In the long run it may be for the best that ad agencies remain so densely provisioned with the manifestly unclued, because this way of doing things would be nothing short of terrifying in the hands of someone who knew what they were doing. The short term picture, though, is clearly less reassuring; as Russell puts it, “we’re going to encounter a bunch of crappy sorta-games foisted on us.”
You think he’s jumping the gun, assuming the worst, maybe being a little hyperbolic? Ladies and gentlemen, I give you Exhibit A.
But fortunately, there are other games to be played, much cleverer and more interesting ones. Bruce Sterling offered a lovely vision of networked rewards in the real world in his 1998 short story “Maneki Neko.” The story has dated badly in some ways — in a precise inversion of what came to pass, it’s amusing to see the story’s Japanese wield sleek, protean “pokkekons” while their American counterparts suffer with clunky Silicon Valley PDAs — but in other ways it’s clear that Bruce had the notion sussed.
His depiction of a sweetly networked gift economy, in particular, makes the Schellian universe look tawdry. “Maneki Neko” would seem to argue that you don’t need “points” and meaningless achievements unlocked to motivate behavior, when enlightened self-interest and the joys of participating in reciprocal agalmics are sufficient.
I think we could all see it coming the moment Schell’s DICE2010 talk went up on the technology blogs. “See”? You could practically smell the agency nation bruising its collective index finger on the mouse key as it raced to scrub through the half-hour video in search of bullet-pointable content for the next morning’s PowerPoint. Russell’s probably being too generous by half: I think we’re in for a Laird Hamilton-sized wave of pointlessness, as too many not-bright-enough parties fall all over themselves trying to enact and deploy incompatible, mutually incoherent Schell-style solutions.
In some ways, it really is too bad. Given that vice is generally its own reward, the need to incentivize these behaviors at all suggests to me that there’s nothing inherently wrong with most of what such structures are designed to motivate. For that matter, I tend to be favorably inclined toward any incentive system that begins, however tentatively, to jimmy our lives from the grip of the money economy. I just wish fewer people had described Schell’s video enthusiastically, as “the most mindblowing thing I’ve seen all year,” and more as “something potentially troubling, that we need to think carefully about.”
Because the dopaminergic system can be an inhumanly powerful force, beside which all our notions of “will” are laughable, and where it can take a person is not at all pretty. I just don’t like thinking of it as a tool available to someone bent on designing my life for me. And with all due respect, especially not to a community dedicated to the proposition that “reality is broken [and] game designers can fix it.”
That’s a heavy place to wind up, and here I’d intended this post to be both briefer and lighter. But maybe some of these notions could do with a bit of taking seriously.
So it looks like I’ll be in Amsterdam next month to speak at WCIT 2010: the seventeenth annual World Congress on Information Technology, an event whose theme is “Challenges of Change.” (Lot of challenges this year, I guess, and that’s even before your civilization’s transportation grid is brought to its knees by the merest grumblings of an Icelandic firegod.)
I am of course delighted to be at WCIT, but I have to say I’m a little perplexed by the relevance of anything I have to say to the track I’ve been assigned, “Creative Industries.” People I have a great deal of respect for have found institutional homes in departments so named, so there must be some there there, but for the life of me I can’t figure out why a rubric so fuzzy and problematic has risen to prominence so quickly.
Actually, I find the recent emphasis on “creative” X, Y and Z more than a little troubling. Part of this is simply a lifelong aversion to flavor-of-the-month thinking and empty jargon, but it’s also that it all seems to be down to the influence of Richard Florida — and in my mind, Florida’s seeming advocacy of things I care about deeply winds up trivializing and ultimately undercutting them.
Methodologically, of course, Florida’s original work leaves a great deal to be desired, so much so that the serious social scientists I know preemptively cringe when they can sense his name about to be uttered. The problems start right off the bat, with Florida’s definition of “creative”; in his hands, the term becomes so elastic as to be effectively meaningless, unless you truly believe that surgeons, hairdressers and cabinetmakers are all responding to the same primary imperatives in their choice of occupation.
But then it’s not clear that even if they did, they would think of themselves as a self-conscious class — i.e. a group with overriding shared or collective interests — at all. The sprawling cohort Florida anoints as creative for the purposes of making his case have so little in common otherwise that it’s hard to ever imagine them constituting a coherent constituency, voting bloc, market or audience.
I also wish somebody would tell me just which fields of human endeavor constitute these supposed “creative industries.” The laundry list of criteria that have been advanced strikes me as more self-congratulatory than diagnostically useful, and just about Borgesian into the bargain.
The error is compounded when some well-meaning effort is made to attract both class and industries to what are now being dubbed “creative cities.” Believe me, I have absolutely no problem if you want to attract creative people to your city, nor would I complain in the slightest if you rigged the machinery of municipal policy so as to render your part of the world that much more welcoming to gay men and bicyclists. We could all use a leisurely ride every once in a while, and so far as I know no city has ever done anything but make money and have a good time during an International Bear Rendezvous. That is all well and good.
But don’t for a moment make the mistake that by so doing, you’ll automatically become Silicon Valley 2.0, let alone catapult your two-bit burg into the stratum of Sassen-class world cities. Convincing the startups, the venture money, and the young innovators that your part of the world would make a congenial home, in the hopes of cultivating a robust and sustainable tax base, is a perfectly reasonable thing to want to do. But the honest truth is that not every place is or ever will be equally set up to succeed in these things, and anybody who suggests otherwise is selling you a bill of goods.
The cynic (or the realist critic of neoliberalism) points out that investment is attracted by a “stable” local political environment and a docilized labor market contained by business-friendly wage and collective-bargaining laws. The Floridian, ever so slightly more evolved, will argue that sidewalk cafés, plentiful bike parking, and a neighborhood that breaks out in fluttering rainbow bunting come Pride each year are more likely to attract the clean, green twenty-first century investment you’re presumably really looking for. Better to snare Jamba Juice and the Apple Store and the kind of people who shop in them, goes the argument, than Pig Iron Smelting Joint Venture No. 4.
That’s all fine, as far as it goes. But I believe there’s a single factor that makes one or another region more attractive to the kinds of people and investment that apparently now signify above all others — and I’m sorry, Metz, it’s not having a starchitect-designed museum. It’s a factor I think of as organic sense of place.
Amsterdam, Barcelona, San Francisco, New York and London all have persistent local ways of doing and being, and that’s what makes them compelling places to work and settle, despite the inevitable hassles attendant upon doing so. These lifeways obviously evolved over historical time, and the harsh truth we can conclude from this is that there’s no turnkey way to join their ranks, no book you can read or seminar you can attend that can tell you how to be one of them. This has got to be a bitter pill to swallow, I know, if you’re Masdar or Sejong City.
I understand that times are tough, competition between cities is relentless and those of you responsible for making urban-scale decisions are desperately interested to hear from someone, anyone at all, who seems confident about having the answers. I’m simply begging you not to swallow Richard Florida’s ideas whole (or mine, or anybody else’s at all).
If you care about queer lives and two-wheeled transit, by all means take measures to support them. But do so on their own terms, in, of and for themselves, and not because you’re following some pop sociologist’s half-assed recipe for urban renaissance in the hope of luring development. Who knows, maybe a sincere effort at the former will wind up fructifying your town in all kinds of unexpected ways; it’s not as if it’s ever a particularly bad idea to underwrite civilization and amenity.
But if all you care about in the end is the flow of investment, talent and human capital through your town, you can probably save yourself the half-hearted effort at draping yourself with the Creative Industries mantle. There are plenty of other ways to attract capital, and though they’re neither as glamorous nor as generative of the instant cred that goes hand-in-hand with having purchased this year’s model, they work and work reliably.
I’ve never heard anyone accuse Zürich, for example, of having a blistering DJ scene, cutting-edge galleries or forward-leaning popup shops. Yet they seem to be doing OK when it comes to the cheddar, you know? Better a world of places that are what they are, and stand or fall on their own terms, than the big nowhere of ten thousand certified-Creative towns and cities with me-too museums, starchitected event spaces and half-hearted film festivals.
Over the past year, Helsinki has more or less quietly installed large, high-definition Symbicon displays on sidewalk locations around town (on a contract with the deeply regrettable Clear Channel, but that’s another story).
You know I’m at least mildly skeptical about the benefit of street-level screens, but two campaigns (“ads”? “clips”?) I’ve seen over the past few months have convinced me that there’s an emergent practice of programming artfully for them. I don’t know enough to say whether these strategies developed in response to cost or time constraints, as the result of some thoughtful, intentional process, or from something else entirely – in fact, it seems clear that the two examples I’m going to share with you spring from different sets of circumstances – but as far as I’m concerned you can go ahead and file them under “best practices.”
The first time I was impressed by content on Helsinki’s screens was advertising I noticed at the beginning of summer. As my mind’s eye remembers it, anyway, what appeared onscreen was a single image completely duplicating the content of an otherwise entirely conventional and inert poster appearing around town at the same time, with a single, subtle exception: the headline text, and only the headline text (i.e. not any of the other copy) animated in and out.
At first glance, this would seem to be a pretty wasteful use of the potential inherent in full-motion, HD video, but that’s the thing precisely: the first glance led to a second, and a third, in a way that a conventional video ad would not have. Like anything appearing in the banner-ad position atop a Web page, we already know to tune those things out. By contrast, I found the simple text transitions hugely compelling. However they arose, and whatever decisions led to that particular choice, the posters felt restrained and sophisticated, not impoverished: a proper deployment of form for an oversaturated age. I kept thinking, “Here’s that rare someone who has an inkling what to do with these monsters.”
I had the same reaction again the other day. The screens are currently running ads for the Swedish high-street retailer H&M, shot with a high-speed camera – models sloooooowly turning, as a cascade of red leaves ever-so-softly settles over them and to the ground. Just as with the movie posters, I found myself paying the H&M ads an inordinate amount of attention. Because the images’ figural elements evolve so glacially against a stable background, they’d found my cognitive sweet spot, that precise interval at the threshold of visual perception that makes you ask yourself: Wait, did that just change? What part of it? And I minded not at all. (In fact, I found it kind of calming. There’s a word you certainly don’t hear every day in the context of advertising.)
Taken together, I’m beginning to think these two experiences point at something counterintuitive: given the inherent dynamism of most streetscapes – yes, even Helsinki’s – perhaps the most effective presentation strategy for street-level urban media is an embrace of the jnd, the just-noticeable difference. By distinct contrast to the other hammeringly unsubtle screens I can think of (Shibuya kosaten, of course, but also that one on the 280 approaching Daly City), the primary mode of which seems to be epileptiform flicker, I’ve wound up disposed reasonably kindly to the displays around here, and thinking of them as an unproblematic addition to the visual environment. I think that’s about the best we can expect at this point.
UPDATE: I’ve uploaded some video of the H&M ads to Flickr so you can see them for yourselves and see what you think.
It’s a terrible word, but maybe a terrible thing deserves one: “responsibilization” refers to an institution disavowing responsibility for some function it used to provide, and displacing that responsibility onto its constituents, customers, or users. Pat O’Malley, in the SAGE Dictionary of Policing, provides as crisp a definition as I’ve found, and it’s worth quoting here in full:
…a term developed in the governmentality literature to refer to the process whereby subjects are rendered individually responsible for a task which previously would have been the duty of another – usually a state agency – or would not have been recognized as a responsibility at all. The process is strongly associated with neoliberal political discourses, where it takes on the implication that the subject being responsibilized [!] has avoided this duty or the responsibility has been taken away from them in the welfare-state era and managed by an expert or government agency.
Of course, it’s not just state agencies. It’s every half-stepping, outsourcing, rightsizing, refocusing-on-our-core-competency business you’ve encountered in these austere days, shedding any process or activity which cannot be reimagined as a profit center. You’ll get the taste of it any time you turn to a Web community to replace the documentation or customer service manufacturers used to provide as a matter of course. More generally, we see the slow spread of attitudes like this reflected in technological artifacts like the femtocells carriers want to sell you to patch the holes in their own network coverage and semiotic artifacts like the signage here, not-so-subtly normalizing the idea that checking in for a flight is something that should be accomplished without recourse to expensive, troublesome human staff.
In both of these cases, a rhetorical sleight-of-hand is deployed to reframe the burden you must now shoulder as an opportunity – to convince you, to trot out once again a phrase that is rapidly outstaying its welcome, that what you are experiencing is a feature and not a bug. And this is the often-unacknowledged downside in the otherwise felicitous turn toward more open-ended product-service ecosystems: the price of that openness is generally increased vigilance and care on the user’s part, or “wrangling.” But there’s a stark difference, as I read it anyway, between knowingly taking on that order of obligation in the name of self-empowerment and improved choice, and having to take it on because the thing you’ve just shelled out a few hundred dollars for is an inert brick if you don’t.
I’m not sure there’s any long-term fix for this tendency in a world bracketed by the needs of institutions driven primarily by analyst calls, quarterly earnings estimates and shareholder fanservice on one flank, and deeply seamful technologies on the other. The pressures all operate in one direction: you’re the one left having to pick up a sandwich before your five-hour flight, figure out what on earth a “self-assigned IP address” means, and help moribund companies “innovate” their way out of a paper bag, for free. So if you manage an organization, of whatever size or kind, that’s in the position of having to do this to your users or customers, you definitely have the zeitgeist defense going for you. But at least have the common decency not to piss on people’s heads and tell them it’s raining.
There’s more on such “boundary shifts” here, and I’ll be writing much more about their consequences for the user experience over the next few months. For now, it’s enough to identify the tendency…and maybe begin to think about a more euphonious name for it, as well.
As most of you know, I pay a decent amount of attention to products offered under the Puma brand. Even when a particular item or line doesn’t quite do it for me – and this happens more and more often with every passing year, presumably because I’m ever more decisively aging out of their target demo – there’s generally something ever so slightly more interesting about the stance and overall aesthetic of the things they sell than those of competitors Adidas and Nike.
Nor should it come as any surprise that I’m going to be especially interested in a line called “Urban Mobility,” which has at various points over the last two years consisted of shoes, baggage, clothing, and even a white-labeled Biomega bike.
In Puma’s conception, urban mobility apparently has to do with affording the wearer free movement of the body, protecting him or her against inclement conditions, and offering plenty of pockets. These are not clothes for sitting in cars, riding on buses, or waiting on subway platforms, in other words; apparently, getting around the city is something that must be negotiated parkour-style, in the remorseless arena of the physical, unaided by anything infrastructural.
I’m not necessarily put out by the fact that the line invests the act of getting around the city with a glamour entirely missing from most of the actual, everyday transactions involved – after all, isn’t that kind of the point of fashion? Nor am I even that surprised by the relative functional underperformance of the garments and luggage, their elevation of (nice-ish) typography and silly posturing over any real utility. (Though if you’re going to do “urban mobility,” you might as well do it.)
No, the biggest disappointment to me in all of this, by far, is that not a single one of the artifacts included in the Urban Mobility line partakes of or refers to the networked information real-world city mobility is increasingly built upon. It’s not just a question of Puma being a maker of stuff, not services; remember, even the abortive Trainaway offering included online and audio components. It’s a failure of imagination and understanding.
At the very least, how hard would it have been to gin up an Urban Mobility iPhone app? I mean, sure, it’s the kind of flavor-of-the-month thing I generally decry, an initiative which would at first blush appear heir to all the sad-ass metooism of most such marketing efforts. But in this case there would at least be some logic and justification underwriting the effort, considering that urban mobility is manifestly what people do with these devices.
I know, I know: I’m being too literal. I’m failing to grasp that concern for function is too often the death of fantasy. More importantly, I’m failing to account for the fact that the whole collection is past its sell-by date (and doesn’t seem to have done that well to begin with). I’m showing my age, my lack of edge, whatever. Mark my words, though: such efforts are going to feel increasingly weak and incomplete without a networked component of some type, and the more so the greater the degree to which the posture subtends a domain in which the informatic is primary.
Whether they’re entirely conscious of it or not, technodeterminists of various stripes love to invoke The Hunchback of Notre Dame in explaining the impact of emergent media on the world around us. “This will kill that,” moans Hugo’s miserable archdeacon Claude. “The press will kill the church; printing will kill architecture.”
It’s that kill that really sells the line, and moors it in memory: so dramatic, so decisive, so brutal. And so we’re told that the telephone kills the written word, that video kills the radio star, that the recordable audio cassette kills the recording industry. U.s.w., u.s.w., u.s.w.
But radio didn’t die, not right away, just like email hasn’t (yet) killed the Postal Service and the Kindle hasn’t entirely done away with the printed book. These are entirely different kinds of propositions, serving different populations and different purposes through different apertures.
At the same time, though, you’d have to be blind not to notice the shifting of their relative fortunes in the world. How to account for these shifts more accurately, less reductively, less like a douchey futurist would?
I’m beginning to think of the set of interfaces through which we engage meaning and interact with the wider social world as a mediating stack, with distinct many-to-one, one-to-one and one-to-many layers. The precise composition of this stack is going to be different for each of us, varying widely by where we live, how much time, money and effort we can afford to spend on its composition and maintenance, and (especially) when we came of age. So where my grandmother used radio, TV, newspapers, phone calls and written letters to bind her world together, I tend to use the Web, email and IM. And – here the technology really does tell – where she didn’t have access to a one-to-many channel at all, I have WordPress, Twitter, and (in edge cases) a variety of burst-email and -SMS options available to me.
The important thing is this: the grandeur always lives at the top of the stack. Right now, it’s vested in “social media,” just as it was in blogging ten (!) years ago, in television forty years ago and in newspapers sixty years before that. What each new media technology does do is knock away one or more of the social and economic props on which the success (and ultimately, the viability) of other channels in its layer depend. With the introduction and mass adoption of anything new, those channels move further down the stack. They become less central to the production of consensus culture, more a niche proposition, almost certainly less glamorous. But if a given way of doing things offers something that no other mediating technology can – whether for reasons of exceedingly low cost, low barriers to entry, or robust simplicity – it will never disappear entirely.
What we’re seeing right now with newspapers, I think, is simply that they may be dropping off the bottom of the stack. The struts of their justification have been eroded in too many different ways, from too many different directions. Newspapers are a threefold proposition – they inform, aggregate eyeballs for the benefit of advertisers, and furnish the container in which a shared civic community can be seen to form – and each of these value propositions has now been near-fatally undermined by some other channel. The rising price of pulp and delivery fleets is merely a convenient excuse to pull the plug.
So some given That may indeed be about to be killed, after all, but not by This – not, in other words, as any Hugoesque single-bullet theory would have it. It’s more like the achingly protracted death of a thousand cuts, inflicted from near as many different directions, and only because everything That could offer was already being done and done better by a swarm of other things. The distinction may appear trivial, but I believe it offers more useful insight into the process by way of which mediating technologies eventually get subducted and disappear from daily use.