Google’s recent announcement of App Inventor is one of those back-to-the-future moments that simultaneously stirs up all kinds of furtive and long-suppressed hope in my heart…and makes me wonder just what the hell has taken so long, and why what we’re being offered is still so partial and wide of the mark.
I should explain. At its simplest, App Inventor does pretty much what it says on the tin. The reason it’s generating so much buzz is because it offers the non-technically inclined, non-coders among us an environment in which we can use simple visual tools to create reasonably robust mobile applications from scratch — in this case, applications for the Android operating system.
In this, it’s another step toward a demystification and user empowerment that had earlier been gestured at by scripting environments like Apple’s Automator and (to a significantly lesser degree) Yahoo! Pipes. But you used those things to perform relatively trivial manipulations on already-defined processes. I don’t want to overstate its power, especially without an Android device of my own to try the results on, but by contrast you use App Inventor to make real, usable, reusable applications, at a time when we understand our personal devices to be little more than a scrim on which such applications run, and there is a robust market for them.
This is a radical thing to want to do, in both senses of that word. In its promise to democratize the creation of interactive functionality, App Inventor speaks to an ambition that has largely lain dormant beneath what are now three or four generations of interactive systems — one, I would argue, that is inscribed in the rhetoric of object-oriented programming itself. If functional units of executable code can be packaged in modular units, those units in turn represented by visual icons, and those icons presented in an environment equipped with drag-and-drop physics and all the other familiar and relatively easy-to-grasp interaction cues provided us by the graphical user interface…then pretty much anybody who can plug one Lego brick into another has what it takes to build a working application. And that application can both be used “at home,” by the developer him- or herself, and released into the wild for others to use, enjoy, deconstruct and learn from.
There’s more to it than that, of course, but that’s the crux of what’s at stake here in schematic. And this is important because, for a very long time now, the corpus of people able to develop functionality, to “program” for a given system, has been dwindling as a percentage of interactive technology’s total userbase. Each successive generation of hardware from the original PC onward has expanded the userbase — sometimes, as with the transition from laptops to network-enabled phones, by an order of magnitude or more. The population able to program for those devices, meanwhile, has grown nowhere near as fast.
The result, unseemly to me, is that some five billion people on Earth have by now embraced interactive networked devices as an intimate part of their everyday lives, while the tools and languages necessary to develop software for them have remained arcane, the province of a comparatively tiny community. And the culture that community has in time developed around these tools and languages? Highly arcane — as recondite and unwelcoming, to most of us, as a klatsch of Comp Lit majors mulling phallogocentrism in Derrida and the later works of M.I.A.
A further consequence of this — unlooked-for, perhaps, but no less significant for all of that — is that the community of developers winds up having undue influence over how users conceive of interactive devices, and the kinds of things they might be used for. Alan Kay’s definition of full technical literacy, remember, was the ability to both read and write in a given medium — to create, as well as consume. And by these lights, we’ve been moving further and further away from literacy and the empowerment it so reliably entrains for a very long time now.
So an authoring environment that made creation as easy as consumption — especially one that, like View Source in the first wave of Web browsers, exposed something of how the underlying logical system functioned — would be a tremendous thing. Perhaps naively, I thought we’d get something like this with the original iPhone: a latterday HyperCard, a tool as lightweight and graphic and intuitive as the device itself, but sufficiently powerful that you could make real things with it.
Maybe that doesn’t mesh with Apple’s contemporary business model, though, or stance regarding user access to deeper layers of device functionality, or whatever shoddy, paternalistic rationale they’ve cooked up this week to justify their locking iOS against the people who bought and paid for it. And so it’s fallen to Google, of all institutions, to provide us with the radically democratizing thing; the predictable irony, of course, is that in look and feel, the App Inventor composition wizard is so design-hostile, so Google-grade that only the kind of engineer who’s already comfortable with more rigorous development alternatives is likely to find it appealing. The idea is, mostly, right…but the execution is so very wrong.
There’s a deeper issue still, though, which is why I say “mostly right.” Despite applauding any and every measure that democratizes access to development tools, in my heart of hearts I actually think “apps” are a moribund way of looking at things. That the “app economy” is a dead end, and that even offering ordinary people the power to develop real applications is something of a missed opportunity.
Maybe that’s my own wishful thinking: I was infected pretty early on with the late Jef Raskin’s way of thinking about interaction, as explored in his book The Humane Interface and partially instantiated in the Canon Cat. What I took from my reading of Raskin is the notion that chunking up the things we do into hard, modal “applications” — each with a discrete user interface, each (still!) requiring time to load, each presenting us with a new learning curve — is kind of foolish, especially when there are a core set of operations that will be common to virtually everything you want to do with a device. Some of this thinking survives in the form of cross-application commands like Cut, Copy and Paste, but still more of it has seemingly been left by the wayside.
There are ways in which Raskin’s ideas have dated poorly, but in others his principles are as relevant as ever. I personally believe that, if those of us who conceive of and deliver interactive experiences truly want to empower a userbase that is now on the order of billions of people, we need to take a still deeper cut at the problem. We need to climb out of the application paradigm entirely, and figure out a better and more accessible way of representing distributed computational processes and how to get information into and out of them. And we need to do this now, because we can clearly see that those interactive experiences are increasingly taking place across and between devices and platforms — at first for those of us in the developed world, and very soon now, for everyone.
In other words, I believe we need to articulate a way of thinking about interactive functionality and its development that is appropriate to an era in which virtually everyone on the planet spends some portion of their day using networked devices; to a context in which such devices and interfaces are utterly pervasive in the world, and the average person is confronted with a multiplicity of same in the course of a day; and to the cloud architecture that undergirds that context. Given these constraints, neither applications nor “apps” are quite going to cut it.
Accordingly, in my work at Nokia over the last two years, I’ve been arguing (admittedly to no discernible impact) that as a first step toward this we need to tear down the services we offer and recompose them from a kit of common parts, an ecology of free-floating, modular functional components, operators and lightweight user-interface frameworks to bind them together. The next step would then be to offer the entire world access to this kit of parts, so anyone at all might grab a component and reuse it in a context of their own choosing, to develop just the functionality they or their social universe require, recognize and relate to. If done right, then you don’t even need an App Inventor, because the interaction environment itself is the “inventor”: you grab the objects you need, and build what you want from them.
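To make the kit-of-parts idea a little more concrete, here is a minimal sketch in Python. Every name in it is hypothetical, invented purely for illustration; it is meant only to suggest what it might feel like for an ordinary person to chain free-floating components, operators, and a lightweight display binding into exactly the functionality they need, without ever writing an “application.”

```python
# A toy sketch of the "kit of parts" idea: small, free-floating
# components that anyone can pick up, chain together, and reuse.
# All of these names are hypothetical illustrations, not a real API.

def fetch_contacts():
    """A stand-in for a shared component that yields data objects."""
    return [{"name": "Ana", "city": "Helsinki"},
            {"name": "Bo", "city": "Seoul"}]

def filter_by(field, value):
    """An operator: returns a component that keeps matching items."""
    return lambda items: [i for i in items if i[field] == value]

def render_list(items):
    """A lightweight UI binding: turns objects into display lines."""
    return [f"{i['name']} ({i['city']})" for i in items]

def compose(*parts):
    """Chain components so each one's output feeds the next."""
    def pipeline(data=None):
        for part in parts:
            data = part() if data is None else part(data)
        return data
    return pipeline

# Anyone could assemble just the functionality they require:
my_view = compose(fetch_contacts, filter_by("city", "Helsinki"), render_list)
```

The design point is that the person assembling `my_view` never confronts an application boundary: they only combine parts, and any part they make is immediately available for someone else’s pipeline.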
One, two, many Facebooks. Or Photoshops. Or Tripits or SketchUps or Spotifys. All interoperable, all built on a framework of common tools, all producing objects in turn that could be taken up and used by any other process in the weave.
This approach owes something to Ben Cerveny’s seminal talk at the first Design Engaged, though there he was primarily concerned with semantically-tagged data, and how an ecosystem of distributed systems might make use of it. There’s something in it that was first sparked by my appreciation of Jun Rekimoto’s Data Tiles, and it also has some underlying assumptions in common with the rhetoric around “activity streams.” What I ultimately derive from all of these efforts is the thought that we (yes: challenge that “we”) ought to be offering the power of ad-hoc process definition in a way that any one of us can wrap our heads around, which would in turn underwrite the most vibrant, fecundating planetary ecosystem of such processes.
In this light, Google’s App Inventor is both a wonderful thing, and a further propping-up of what I’m bound to regard as a stagnating and unhelpful paradigm. I’m both excited to see what people do with it, and more than a little saddened that this is still the conversation we’re having, here in 2010.
There is one further consideration for me here, though, that tends to soften the blow. Not that I’m comparing myself to them in the slightest, but I’m acutely aware of what happens to the Ted Nelsons and Doug Engelbarts of the world. I’ve seen what comes of “visionaries” whose insight into how things ought to be done is just that little bit too far ahead of the curve, and how they spend the rest of their careers (or lives) more or less bitterly complaining about how partial and unsatisfactory everything that actually does get built turned out to be. If all that happens is that App Inventor and its eventual, more aesthetically well-crafted progeny do help ordinary people build working tools, firmly within the application paradigm, I’ll be well pleased — well pleased, and no mistake. But in some deeper part of me, I’ll always know that we could have gone deeper still, taken on the greater challenge, and done better by the people who use the things we make.
We still can.