The following is a very lightly edited version of something I wrote for the newsletter I published on a weekly basis all through 2015. I always understood these pieces as ephemera, and so my policy was that there would be no persistent archive of them, and no way for anyone to read a weekly entry they hadn’t received by virtue of being subscribed to the newsletter at the time it was published. They were strictly of and for the moment.
I still think that was a sound policy. But not a week goes by that someone doesn’t ask me to repost the following, and in the interest of saving everyone some time I figured I’d do so here. For reasons that I cannot fathom, it remains the single most-requested among the sixty-odd newsletters I published last year. Usual disclaimers apply, but I hope you enjoy it.
Many of you will recall that for the two years before we moved to London, I was in the habit of convening drinks every Friday night at Temple Bar on Lafayette Street. This standing get-together, imaginatively dubbed FRIDAYS AT 7, remains one of the best things I’ve ever been involved with. I still derive an enormous amount of satisfaction from having brought this particular assortment of people together, still glow from the memory of a great many great nights, and to this day try to arrange a FATSEVEN gathering whenever I happen to be in New York City for more than 48 hours or so.
But it also taught me something very deep about the nature of human socialization. You should know that I inherit from my mother a profound tendency to want to please everyone I’m interacting with, at least in certain contexts — even when there are more than two people involved, even when some of those people disagree with or outright dislike one another. Now, this can be a beautiful trait. Buried within it, I’m sure, are the seeds of some future generation’s ability to settle all invidious contentions, bring all parties to a common table and drape the world in universal harmony. But of all the troublesome tendencies in my psychological makeup (and there are a few), this one quality has perhaps caused more chaos in my various relationships and jobs than any other.
Because as it happens, you just can’t give everyone you know everything they want. I’m not necessarily saying that all relationships are brutally zero-sum games of resource management, but, y’know, they take place inside history. Like anything else that does, they’re subject to entropy, scarcity, the rules of physics. As far as I can see, there are no Pareto-optimal solutions for interpersonal relationships, any more than there are for any other system above a certain threshold of complexity. They’re like a three-body problem. (Sometimes they are a three-body problem.)
It turned out that my dearly beloved FRIDAYS AT 7 crew was like that. Now, I need to do a little bit of stage-setting, so you understand the particular dynamic at play here. Though to a one they were (and are) all fascinating, funny, talented and endearing, not everyone who came to drinks on a regular basis had necessarily tasted success as the world defines it. But there was a subset of folks there who had done so, and by any rational standard these were all accomplished people. They’d published well- and widely-reviewed books, or shown films at world-famous festivals, or played a part in the development of some piece of software you use on a daily basis.
I certainly don’t think any more highly of them because this happens to be the case, because god knows why any of our lives break the way they do. But naturally I admired them for their achievements, as well as for the other things that commended them to my friendship in the first place. And I had assumed that within the social universe of the particularly accomplished, there existed something like a consensus that anyone you might care to name more or less knows what they’re talking about.
And so I’ll confess that it floored me when late one night, on hearing me praise a mutual acquaintance who I myself did consider to be highly accomplished, one of these people said, “I can’t believe you rate that guy. He’s just such total bullshit.” Laboring under my maternal inheritance (which I eventually came to recognize as a mutant strain of Geek Social Fallacy #4, actively operating in both my mother and me decades before it was identified and named as such), it had never occurred to me that some objectively high-achieving people might regard one another in this way.
Yeah, I know. You’d think I would have figured this out on the dewy side of forty, come to some much earlier insight into how contingent and variable human reputation can be. I dunno — maybe I cut class that day. Either way, it wasn’t until that very moment that I realized how acutely uncomfortable my praise of this third party was making my friend. It was clear to me, in fact, that he would begin to question my own judgement if I insisted on proceeding too much further down this path. The conversation would get awkward, then actively difficult, and then, who knows, maybe the friendship would too. Doors of perception blasted wide by my third Stolichnaya martini of the evening, I began to wonder how many other times over the years I had put someone in just this uncomfortable position.
I realized on the spot that what I needed was a Master Bullshit Matrix.
The Master Bullshit Matrix, as I saw it in that blinding flash of insight, would take the form of a very large (but mercifully finite) spreadsheet. In its cells would be recorded — would reside for all time — a complete accounting of just who considers whom to be Bullshit. Accomplished or not, celebrated or not, by definition there would be a place for everyone on the Master Bullshit Matrix, and then we’d all finally be able to reckon just where we stood.
On its face, compiling any such thing would certainly appear to be a spectacularly mean-spirited and juvenile thing to do: the kind of effort snotty fourth-graders set themselves to, when deciding who is and is not allowed to sit at their lunch table. But as I imagined it, the point of the Master Bullshit Matrix was letting everyone involved in one of these conflicts of appraisal save a little face.
Armed with the Master Bullshit Matrix, I wouldn’t embarrass myself (or anyone else) by continuing to insist on the quality of someone the person I was talking to considered Bullshit. Not unless I wanted to, anyway. In any given moment, I could decide whether or not I wanted to press the case for someone’s non-Bullshitness, teasingly needle someone by dropping the name of someone I knew full well they thought was Bullshit, or avoid the topic entirely. I could even cross-reference a particular intersection of personalities, and learn whether the Bullshit judgement ran one-way or two-way.
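If you wanted to actually build such a thing (and I am not saying you should), the core of it is trivial: a set of directed judge/judged pairs. Here is a toy sketch in Python, with entirely hypothetical names, purely to show the one-way/two-way cross-referencing I have in mind:

```python
class BullshitMatrix:
    """A toy Master Bullshit Matrix: who considers whom to be Bullshit."""

    def __init__(self):
        # Directed (judge, judged) pairs: an entry means
        # "judge considers judged to be Bullshit".
        self._entries = set()

    def record(self, judge, judged):
        """Record that `judge` considers `judged` to be Bullshit."""
        self._entries.add((judge, judged))

    def considers_bullshit(self, judge, judged):
        return (judge, judged) in self._entries

    def judgement(self, a, b):
        """Cross-reference a pair: does the judgement run one-way or two-way?"""
        ab = self.considers_bullshit(a, b)
        ba = self.considers_bullshit(b, a)
        if ab and ba:
            return "two-way"
        if ab or ba:
            return "one-way"
        return "none"

    def who_considers_bullshit(self, person):
        """Everyone who has `person` down as Bullshit — useful to know."""
        return {judge for (judge, judged) in self._entries if judged == person}


m = BullshitMatrix()
m.record("Alice", "Bob")
m.record("Bob", "Alice")
m.record("Carol", "Bob")

print(m.judgement("Alice", "Bob"))              # two-way
print(m.judgement("Carol", "Bob"))              # one-way
print(sorted(m.who_considers_bullshit("Bob")))  # ['Alice', 'Carol']
```

The last query is, of course, the one you would consult with a stiff drink in hand.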
Please do not mistake me to be saying that good conversation requires agreement about everything — that you should ever be insincere yourself, or commit yourself to a position you do not in fact hold, just for the sake of someone’s momentary comfort. But there are clearly times when the greater good of social ease requires the deft avoidance of certain conversational minefields. And as I came to understand so late in life, you enter one of those minefields in arguing for someone’s transcendent genius…when your interlocutor believes that person to be Bullshit.
In an attempt to see what it might take to populate the Master Bullshit Matrix, I gently began to probe certain of my more forthright friends for their opinions. All of them understood the question immediately, offering their own personal Bullshit nominations without hesitation. What I found most interesting was that some of these nominations — many of them, in fact — came to me as a complete surprise. It reinforced my sense that there’s absolutely no predicting ahead of time who is going to strike someone else as Bullshit.
Broadly speaking, what seemed to make someone vulnerable to the charge that they were Bullshit? It’s hard to pin down precisely, but certain qualities seemed to crop up fairly often. The perception of insincerity, chiefly. Intellectual laziness, from someone my interlocutor believed we could and should expect better of. Posturing. Ideology when it appeared to be deployed for craven professional, financial or sexual advantage.
There seemed to be some overlap with the Dunning-Kruger effect, but not entirely so — it is broadly acknowledged that some people just can’t help being dumb, and while they may not be aware that they are dumb, this in itself doesn’t necessarily make them Bullshit. In other people, however, the behavior that constitutes reasonable grounds for a Dunning-Kruger diagnosis is 100% the same thing that makes them Bullshit.
Note, too, that the quality of being Bullshit is something that mostly seems to reside at the professional or vocational level. Very importantly, there doesn’t seem to be anything preventing you from liking or enjoying the company of someone you believe to be Bullshit. Indeed, among the friends I talked to, some of their nominations were folks I know full well that they remained greatly fond of. These weren’t bad people. They were just Bullshit.
Of course the most interesting thing you could do with a Master Bullshit Matrix would be using it to discover who believes that you yourself are Bullshit. You could avoid wasting your time with those people; if you were particularly brave, you could even open up the question of your possible Bullshitness with them, and take steps to address the grounds for their belief, if any. Again, as I imagined it, anyway, the Master Bullshit Matrix would be a constructive tool for interpersonal growth and the avoidance of inadvertent offense, not a preteen’s nasty little cut-book. On this count I am probably being optimistic.
Is it possible to know that one is Bullshit? It’s hard to say. Perhaps, like the Dunning-Kruger effect itself, it’s a self-blinding condition: if you knew you had it, you wouldn’t have it. But it’s worth thinking about, isn’t it?
At the moment, I’m neck-deep in my Verso stablemates Nick Srnicek and Alex Williams’s still-newish book Inventing the Future; things remaining more or less stable schedulewise, I’ll most likely finish it later on today, or tomorrow at the latest.
It’s a strange book, Inventing. You may have caught some of the buzz around it, and that buzz exists for good reason. (It’s not just the superspiffy totebags Verso had ginned up for it, though I’m sure those do not hurt one whit.) At its heart a passionate argument against work and for an end to neoliberalism and its reality control — forged along the same rough lines as those Paul Mason and the Fully Automated Luxury Communism kids are currently touting — Inventing is a genuinely curious mixture of crystal-clear analysis, righteous provocation and infuriating naivety. If you’re even remotely interested in what emergent technologies like machine learning and digital fabrication might imply for our capacity for collective action, and especially if you think of yourself as belonging to the horizontalist left, you should by all means pick it up, read it for yourself and form your own judgments. (Here’s Ken Wark’s take on it; I endorse most of his thoughts, and have a great deal of my own to add, which I’ll do in the form of my own forthcoming book.)
Late in the book there’s a passage concerning the stance Srnicek and Williams feel the postcapitalist left needs to adopt toward the mainstream media: if the “counter-hegemonic” project they describe is to have any hope of success, they argue, “it will require an injection of radical ideas into the mainstream, and not just the building of increasingly fragmented audiences outside it.”
Well. It must be said that this is not one of the book’s high points. In its latent suggestion that the only reason Thomas Piketty and Donna Haraway aren’t cohosting a lively, popular Sunday-morning gabfest on NBC right this very moment is that we, the progressive public, are somehow not trying hard enough, or have failed to sufficiently wrap our pointy heads around the awesome conditioning power of the mass media, in fact, it’s somewhere between irritating and ridiculous. (It’s hard for me to see how Srnicek and Williams’s argument here is substantively any different from that stroke of market-savvy inspiration the beloved but famously marginal Minutemen skewered on the cover of their second-to-last album. And now you know where the title of this post came from.)
Nevertheless, they’re onto something. Though that more-than-faintly patronizing tone never quite dissipates, S&W eventually find themselves on far firmer ground when they argue that “[l]eftist media organizations should not shy away from being approachable and entertaining, gleaning insights from the success of popular websites.” I was able to shake off the momentary harrowing vision I had of Leninist Buzzfeed, and press on through to what I take to be their deeper point: radical thought can actually resonate broadly when care is taken to craft the language in which that thought is expressed, and still more so when insular, self-congratulatory obscurity is avoided in the design of its containers. I endorse this notion wholeheartedly. This recent appreciation of Jacobin hits many of the same notes; whatever you think of Jacobin’s politics, it’s hard to deny that its publishers consistently put together a sprightly, good-looking read. (I’d call it “the Monocle of the left,” but that would be to imply that Monocle’s content is far more compelling than in fact it is.)
You might still argue that S&W ought to spend a little more time with McLuhan. My own feeling is that there’s more to distrust about the “mainstream media” than merely its overtly political content — that consuming information in the form of tweets, listicles, Safety Check notifications, screens overloaded with crawlers, and possibly even glowing rectangles themselves is hard to square with the kind of awareness I at least find it necessary to cultivate if I’m to understand anything at all about the way the systems in which I’m embedded work.
But ultimately, these are quibbles. I agree with S&W when they argue that overthrowing the weaponized “common sense” of the neoliberal era is an explicitly counter-hegemonic project; that developing a functioning counter-hegemony is something that requires long-term commitment; and that those with truly radical programs need to reconsider the relationship between “pop,” “popular” and “popularity” if that whole hearts-and-minds thing is ever going to work out for them. (I’m honor-bound to point out that Saul Alinsky said as much fifty years ago, but perhaps that too is a quibble.) So: no. I have no problem at all with presenting complex and potentially challenging ideas accessibly, so long as they can be rendered accessible without dumbing them down. If successful counter-hegemonic media looks a whole lot more like a Beyoncé video than some preciously anti-aesthetic art installation, so much the better. Bring on the hit songs.
We can surely read the various technologies of the quantified self as tending to “ensure that people continue to act and dream without any form of connectedness and coordination with others” (Stavrides), and this quick, cogent piece will only reinforce that sense.
Now, I can imagine a world — just barely, but it can be done — in which the capture of biometric measurements by a network with qualities of ubiquity and persistence was somehow not invidious. I can even imagine a world in which that capture resulted in better collective outcomes, physically, psychically and socially. But in our world, the one we actually live in, I think the very best we can possibly hope for from these technologies is positive-sum competition, a state in which each of our individual outcomes only improves for the fact that we are set against each other.
That’s the best-case scenario. Even that is still competitive, still oriented solely toward the individual, still only bolsters the unquestioned supremacy of the autonomous liberal subject. And far more likely than the best case, frankly, is the case in which data derived from these devices is used to shape life chances, deprive us of hard-won freedoms at work, mold the limits of permissible expression or even bring violence to bear against our bodies.
My bottom line is this: Though I’d be happy to be proven wrong, given everything I know and everything I’ve seen it is very, very difficult for me to imagine socially progressive uses of quantified-self technologies that do not simultaneously generate these easily foreseeable, sharply negative consequences. It may be my own limitations speaking, but I can’t see how things could possibly break any other way. In this world, anyway.
Wagner James Au, who would know, has what in a better world would be an incendiary piece in the latest Wired. Au’s piece lays it all right out there, regarding the meaning and purpose of virtual reality.
As VR’s leading developers straight-up admit in the piece, its function is to camouflage the inequities and insults of an unjust world, by offering the masses high-fidelity simulations of the things their betters get to experience for real. Here’s the money quote, no pun intended: “[S]ome fraction of the desirable experiences of the wealthy can be synthesized and replicated for a much broader range of people.” (That’s John Carmack speaking, for future reference.)
I always want to extend to those I disagree with some presumption of good will. I don’t think it’s healthy, or productive, or pleasant for the people around me, when I spend my days in a permanent chokehold of high dudgeon. And I always want to leave some room for the possibility that someone might have been misunderstood or misquoted. But Au is a veteran reporter on this topic; I think it’s fair to describe his familiarity with the terrain, and the players, as “comprehensive.” So I rather doubt he’s mischaracterized Carmack’s sentiments, or those of Oculus Rift founder Palmer Luckey. And what those sentiments amount to is outright barbarism — is nothing less than moral depravity.
The idea that all we can do is accede to a world of permanent, vertiginous inequity — inequity so entrenched and so unchallengeable that the best thing we can do with our technology is use it as a palliative and a pacifier — well, this is everything I’m committed to working against. Thankfully there are others who are also doing that work, who understand the struggle as the struggle. Thankfully, I think most of us still understand Carmack’s stated ambition as vile. We do, right?
I’ll have more to say about the uses of VR (and its cousin augmented reality, or AR) shortly.
Jeremy Rifkin’s Zero Marginal Cost Society is a book that’s come up a few times in discussions here, and while I may have mentioned that I have multiple problems with it — its transparent assembly by interns, the guileless portrayal it offers of the Internet of Things, and particularly some of the lazy methods of argumentation Rifkin occasionally indulges in — it gets one thing so thunderingly right that it is worth quoting at some length.
The following is the best short description of the neoliberal evisceration of the public sphere between 1979 and the present I have ever come across. It resonates with my experience in every particular — and I’ve lived through this, seen it unfold on both sides of the Atlantic. If you were born anytime after, oh, 1988 or so, it will be very useful in helping you understand just what has been done to your world, and to you.
I’ll be honest with you: Sometimes I want to weep for what we’ve lost. Just the enumeration in the very first paragraph is almost overwhelming.
The Reagan/Thatcher-led economic movement to privatize public goods and services by selling off telecommunications networks, radio frequencies, electricity generation and transmission grids, public transport, government-sponsored scientific research, postal services, rail lines, public lands, prospecting rights, water and sewage services, and dozens of other activities that had long been considered public trusts, administered by government bodies, marked the final surrender of public responsibility for overseeing the general welfare of society.
Deregulation and privatization spread quickly to other countries. The magnitude of the capitulation was breathtaking in scope and scale. Governments were hollowed out overnight, becoming empty shells, while vast power over the affairs of society shifted to the private sector. The public, at large, was stripped of its “collective” power as citizens and reduced to millions of autonomous agents forced to fend for themselves in a marketplace increasingly controlled by several hundred global corporations. The disempowerment came with lightning speed, leaving little time for public reaction and even less time for public engagement in the process. There was virtually no widespread debate at the time, despite the breadth of the shift in power from the government to the private sector, leaving the public largely unaware and uninvolved, although deeply affected by the consequences.
For the most part, free-market economists, business leaders, neoliberal intellectuals, and progressive politicians — like President Bill Clinton of the United States and Prime Minister Tony Blair of the United Kingdom — were able to prevail by portraying the market as the sole key to economic progress and castigating critics as old fashioned and out of touch or, worse, as Soviet-style apologists for big government. The collapse of the Soviet empire, with its widespread corruption, inefficiencies, and stagnant economic performance was trotted out at every occasion as a whipping boy and proof positive that the well-being of society would be better assured by placing all the economic marbles in the hands of the market and letting government shrivel to the most rudimentary of public functions.
Large segments of the public acquiesced, in part because they shared a sense of frustration and disappointment with government management of goods and services — although much of the ill feeling was contrived by a business community anxious to penetrate and mine a lucrative economic largesse that had long remained under government auspices and beyond the reach of the market. After all, in most industrialized countries, publicly administered goods and services enjoyed an enviable track record. The trains ran on time, the postal service was dependable, government broadcasting was of a high quality, the electricity networks kept the lights on, the telephone networks were reliable, the public schools were adequate, and so forth.
In the end, free-market ideology prevailed.
After this rather brutal, unremitting account, it is true that Rifkin points us at the global Commons he perceives aborning as a legitimate source of hope. Let us, in turn, hope that he’s onto something. To quote someone I hold in the deepest contempt, there really is no alternative.
Twenty-five years ago, just after the outbreak of the first Gulf War, I moved into an anarchist co-op in the Upper Haight. (If you know the neighborhood at all well, you’ve almost certainly stood beneath my room: the bay window jutting directly above the ATM on Belvedere Street, at the time and for many years thereafter the only one for over a mile in any direction.) Though its every fiber was saturated with the sad pong of sexually deprived male bitterhippies in early middle age, the flat nevertheless (/therefore?) boasted one of the most impressive specialist libraries I’ve ever encountered.
No doubt because many of the flat’s residents had historically been associated with the Haight’s anarchist bookstore, Bound Together, its shelves had over the years accumulated hundreds of rare and unusual books on squatting, DIY technique, self-housing, revolutionary syndicalism, the politics of everyday life and so on. Among these was a curious 1976 volume called Radical Technology. Something between a British Whole Earth Catalog and an urban Foxfire book, Radical Technology presented its readers with a comprehensive and detailed blueprint for self-reliant, off-the-grid living.
Each of the book’s sections was fronted by an elaborate illustration depicting what typical British spatial arrangements — terraced housing, allotments, council estates, parish churches — might look like after they’d been reclaimed by autonomist collectives, in some not too terribly distant future. Unlike some of the more heroic imaginaries that were floating around in that immediate pre-Web epoch, you could readily imagine yourself living in their simple everydayness, making a life in the communal kitchen and sauna and printmaking workshop they depicted. From the material-economic perspective of someone residing in a shabby flat in the Upper Haight circa 1991, struggling to eke out a living as the city’s worst and clumsiest bike messenger, it would clearly be a good life, too: austere, perhaps, in some ways, but fulfilling and even generous in every register that really counts. (To be sure, this was a sense the illustrations shared with contemporary real-world outcroppings of late hippie technology in both its particularly British and its Bay Area variants, and I’d seen traces of it crop up in squats and urban homesteads back East, wherever someone resident had been infected by the Whole Earth/Shelter/Pattern Language ethos.)
I clean forgot about Radical Technology for a quarter century, but I never did forget those drawings. I had no way of reconsidering them, though, let alone pointing anybody else at them, until the other day, when Nick Durrant recognized my vague handwavings for what they were: a description of the “Visions” series anarchist illustrator Clifford Harper contributed to the mid-’70s British journal Undercurrents. (These issues of Undercurrents were subsequently anthologized as the book I’d come across; here are scans of Harper’s entire series.) I had to smile when I read the account of “Visions” on Harper’s Wikipedia entry, as it could not possibly have been more on the nose:
These were highly detailed and precise illustrations showing scenes of post-revolutionary self-sufficiency, autonomy and alternative technology in urban and rural settings, becoming almost de rigueur on the kitchen wall of any self-respecting radical’s commune, squat or bedsit during the 1970s.
My memory of Harper’s “Visions” returned with such force not because I’d suddenly developed nostalgia for the lifeways of alternative San Francisco in the first ripples of its death spiral — though those house-feedingly enormous vegetarian stir-fries sure were tasty — but because the way of doing and being they imagined seems relevant again, and possibly more broadly so than ever before.
Something is clearly in the air. The combination of distributed, renewable microgrid power with digital fabrication, against a backdrop of networked organization, urban occupation and direct action, seems to be catalyzing into a coherent, shared conception of a way forward from the mire we find ourselves in. Similar notions crop up in Paul Mason’s Postcapitalism, in Jeremy Rifkin’s The Zero Marginal Cost Society (the particular naivety of which I’ll have more to say about in short order), in Nick Srnicek and Alex Williams’s Inventing the Future, and the same convergence of possibilities animated my own first pass at articulating such a conception, a lashed-up framework I rather cheekily called the “minimum viable utopia.”
These conceptions of the possible are all pretty exciting, at least to those of us who share a certain cast of mind. What they’re all missing, though, to a one, is a Cliff Harper: someone to illustrate them, to populate them with recognizable characters, to make them vivid and real. We need them to feel real, so when we print them out and hang them on the walls of flats where the rent is Too Damn High and the pinboard surfaces of the cubicles where we grind away the mindless hours, we remember what it is we’re working so hard to bring into being.
At the very least, we need them so that those who follow us a quarter century from now understand that they too belong to a lineage of thought, belief and action, just as anyone who’s ever been inspired in their work by the Harper illustrations does. Some days, just knowing that line through time exists is enough to get you through the day.
Compare and contrast:
— SHoP Architects, Dunescape, for the 2001 MoMA/P.S.1 courtyard competition.
— Zuloark Collective, el Campo de Cebada, Madrid, 2010.
Two of these projects involve the deployment of digital design and production techniques to create platforms for small-group conviviality, nestled inside larger spaces generally associated with high culture and the flows of capital that support it. The other two involve the use of low-end, commodity material to create platforms for face-to-face deliberation and the practice of democracy (as well as conviviality), deployed in marginal, interstitial or outright occupied spaces.
The appearance of a parallel evolution in these admittedly cherry-picked examples may say more about my wishful thinking than anything else. But it seems to me that there’s clearly something going on here, in the convergence of sophisticated digital design, on-site fabrication and software for the near-real-time user configuration of space in what we might call lightweight placemaking. In all of these projects, we see an emphasis on rapid mountability and demountability, and the mobility and highly sensitive user control they afford. We see high technique brought to bear on utterly commodified, widely available, broadly affordable (even free) materials. And we see these things used to bring people together, both to enjoy one another’s company and to discuss such matters of concern as arise before them.
There’s an especially lovely symbolism to the use of such humble materials in making the place of democracy, and if the use of commodity lumber doesn’t involve quite the same material rhetoric as the use of marble in the ennobling public spaces of the late 19th and early 20th centuries, well, neither is the public being invoked the same.
— SEE ALSO: Francis Cape’s We Sit Together, a history of the wooden bench in the American intentional-community tradition. Image courtesy Murray Guy Gallery.