What does AI do?
First of all, it doesn’t think. It takes inputs and emits outputs. That’s it.
A better question, then, is how can or should we understand the nature of these outputs? The artist Hito Steyerl recently described what she calls “mean images,” the results that we get from a generative AI machine like Stable Diffusion or Midjourney. As she puts it, these images
converge around the average, the median; hallucinated mediocrity. They represent the norm by signalling the mean. They replace likenesses with likelinesses. They may be ‘poor images’ in terms of resolution, but in style and substance they are: mean images.
The images produced by these tools are probabilities, the input you provide averaged out across vast datasets into the output you receive: an inference of an inference. Such an image has no referent in material reality; it is only a statistical rendering that visualizes “real existing social attitudes that align the common with lower-class status, mediocrity and nasty behaviour” (to illustrate, she provides an image she got from inputting “an image of hito steyerl” into Stable Diffusion, and the result is less than flattering — it’s mean; my own attempt, above, is uhhh less mean I guess lol?).
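If you want to feel that flattening for yourself, here's a crude analogy in code (a toy sketch only, not how Stable Diffusion actually works, and the folder and file names are hypothetical): average a pile of portraits pixel by pixel and watch every particular face wash out into one generic, blurry likeliness.

```python
# Toy illustration of a "mean image": pixel-wise averaging across a dataset.
# Not a diffusion model, just the statistical flattening made literal.
import numpy as np
from pathlib import Path
from PIL import Image

def mean_image(folder: str, size=(256, 256)) -> Image.Image:
    """Average every JPEG in `folder` into a single blurry composite."""
    stack = [
        np.asarray(Image.open(p).convert("RGB").resize(size), dtype=np.float64)
        for p in Path(folder).glob("*.jpg")
    ]
    # The mean erases each individual likeness into a generic likeliness.
    return Image.fromarray(np.mean(stack, axis=0).astype(np.uint8))

mean_image("portraits/").save("mean_portrait.jpg")  # hypothetical folder of faces
```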
Steyerl proceeds to address the racial and labour implications of this technology, recalling not only misplaced desires to make facial recognition datasets more efficient at identifying non-White faces (“Police departments have been waiting and hoping for facial recognition to be optimized for non-Caucasian faces”), but also the exploitative microwork industry, which hires harshly underpaid workers, often immigrants or refugees, to pore over the nastiest content within AI training datasets — terrorist attacks and natural disasters, gory violence, sexual abuse — to eliminate what might be objectionable to liberal Western users. This shows that most automation as we know it, from these image generators to self-driving car software, is the result not of a supersmart computer but of traumatized and under-remunerated microworkers all over the world (what Astra Taylor calls “fauxtomation”). It also shows that the ballyhooed problems with AI are routinely met with “fixes” that actually perpetuate and even intensify abuse and extractive exploitation.
Steyerl concludes, as I could only hope she would, by imagining otherwise:
Why not shift the perspective to another future—a period of resilient small tech using minimum viable configurations, powered by renewable energy, which does not require theft, exploitation and monopoly regimes over digital means of production? This would mean untraining our selves from an idea of the future dominated by some kind of digital-oligarch pyramid scheme, run on the labour of hidden microworkers, in which causal effect is replaced by rigged correlations.
Understandably, the how is left unsaid, because it’s a large collective project that has yet to emerge. The closest thing to a movement in this direction is coalescing around the Neo-Luddites, who re-assess the received narrative of the original Luddites (see the Brian Merchant excerpt under “Ephemera” below for more on this) to understand them not as anti-technology but as thoughtful and proud workers unhappy with bosses introducing cost-saving machines intended to replace or degrade skilled work without recourse. In our own digital context, this would mean rejecting technological tools — like generative AI — that workers do not and, under our current economic system, cannot have collective ownership over. To be blunt, it’s about controlling the means of production (or the means of computation, as Cory Doctorow puts it in his new book).
In my estimation, we must have an intimate understanding of how these “digital-oligarch pyramid schemes” become so dominant in our collective imaginary of the future, and perhaps no one has been leading this charge so frustratingly ineptly as OpenAI CEO Sam Altman. In a New York Magazine profile headlined “Sam Altman is the Oppenheimer of Our Age,” with the subheadline “OpenAI’s CEO thinks he knows our future,” we learn that this man understands that we are scared of AI, agrees that we should be, but insists that we should continue to trust him to bring it into our lives anyway, ethically and morally and all that jazz.
One idea, Altman said, would be to gather up “as much of humanity as we can” and come to a global consensus. You know: Decide together that “these are the value systems to put in, these are the limits of what the system should never do.”
The audience, as you might imagine, went quiet. First of all, Altman clearly has no plan. His technology is advancing according to no value system at all, it seems. Moreover, this half-hearted stab at some vague idea of democratic governance rings hollow, as his company has ironically made its model closed-source to protect its early lead in the industry. It’s also a standard move to sidestep responsibility, as when Mark Zuckerberg or other CEOs “welcome regulation” they simultaneously work to hinder through intense lobbying, insisting they can’t possibly do anything differently until governments or other bodies make decisions for them. As the story’s writer Elizabeth Weil smartly frames Altman’s words:
We — a tiny word with royal overtones that was doing a lot of work in his rhetoric — should just “decide what we want, decide we’re going to enforce it, and accept the fact that the future is going to be very different and probably wonderfully better.”
Again, this notion of decisions “we” should make is hilarious in this context, doing the very typical tech-bro work of hinting at his product’s inevitability (“I can lie to you and say, ‘Oh, we can totally stop it.’”) and at its, and his own, inherent benevolence. “I have so much sympathy for the fact that something like OpenAI is supposed to be a government project,” he tells Weil, but it’s clearly the quote of a newly well-media-trained boss trying to talk down the privatized nature of his enterprise and the power concentrated within it. Of course the government should be doing this; it’s not, though 😇.
As Meredith Whittaker, the president of Signal, succinctly argues to Weil, referring to the datasets used by OpenAI and others that are full of copyrighted material:
“What we’re talking about is laying claim to the creative output of millions, billions of people and then using that to create systems that are directly undermining their livelihoods.” Do we really want to take something as meaningful as artistic expression and “spit it back out as derivative content paste from some Microsoft product that has been calibrated by precarious Kenyan workers who themselves are still suffering PTSD from the work they do to make sure it fits within the parameters of polite liberal dialogue?”
Altman is, at the end of the day, just another so-called boy wonder wannabe-Übermensch, pals with the same group of guys — Elon Musk, Peter Thiel — who have run Silicon Valley for years, the same club perpetuating the same myths and saying the same things about what they do, what they stand for, what they offer to the world. The reason we find ourselves re-living it all over again, despite a so-called techlash and despite rising interest rates and an industry supposedly downsizing its expectations, is twofold: one, capital begets capital. Same as it ever was. The second reason is trickier: the technology behind AI, despite its current flimsiness and unreliability, has the potential to change the world, mostly in disastrous ways for labour, art, industry, and, sure, humanity, precisely according to how it has been implemented so far and who is behind it. In the meantime, hopefully Altman will convene that global democratic force to discuss AI values and set standards based on collective ownership over the technology itself — my email is open, dude.
A final tidbit from the profile:
The Altman family ate dinner together every night. Around the table, they’d play games like “square root”: Someone would call out a large number. The boys would guess. Annie would hold the calculator and check who was closest.
Ephemera
An excerpt from Brian Merchant’s must-have new book, Blood in the Machine: The Origins of the Rebellion Against Big Tech, on what we can learn from the Luddites.
Incredible interview with the one and only Martin Scorsese: “‘Well, the industry is over,’ said Scorsese. ‘In other words, the industry that I was part of, we’re talking almost, what, 50 years ago? It’s like saying to somebody in 1970 who made silent films, “What do you think’s happened?”’ But, of course, Scorsese has theories. Studios, he said, are not ‘interested any longer in supporting individual voices that express their personal feelings or their personal thoughts and personal ideas on a big budget. And what’s happened now is that they’ve pigeonholed it to what they call indies.’”
Gaby Del Valle makes sense of the tradcath movement: “The past that tradwives want to return to, an anachronistic pastiche of rugged pioneer individualism and midcentury familial plenty, never really existed. The lifestyle they promote is, like the Neelemans’ faux-rustic kitchen, a thoroughly modern construction: its incongruous elements are concealed behind bespoke doors and linen curtains. These aesthetic signifiers, confused as they may be, point to periods of American history in which white families were prioritized above all others. And some tradwives are explicit about their desire for racial supremacy.”
Hell yeah WGA contract (and its great AI details)
Song Rec: “Can I Talk My Shit?” — Vagabon