Maybe someday I won’t have to talk about AI. For now, I have a new piece out called “Nothing, Forever: Remix Aesthetics for the AI Age,” published by Concordia University’s Global Emergent Media Lab for their ‘In Progress’ series. The series name is apt, because the piece represents some preliminary thinking on my part as we struggle to make sense of the moment we’re in, drowning under what has been termed ‘AI slop.’
Ryan Broderick noticed that slop appeared to be the consensus take on what to call all this stuff we’re being confronted with, and took it upon himself to actually define it according to three characteristics: to the consumer, it is worthless (meaning it has no value); it feels forced upon us, whether by a company or by algorithm; and it is optimized to be as worthless and ubiquitous as possible.
While I quibble with some of his added details (he says Barbenheimer didn’t feel like slop but Deadpool & Wolverine does — I guess, but that seems more like a judgment of taste than anything else), this largely seems accurate: slop is useless and circulated ad nauseam whether we like it or not, and AI has hyper-accelerated all this to an impossible scale.
In my piece, I try to wrestle with this notion, even if I wrote it before ‘slop’ took off. Still, the term feels somewhat inadequate, since it obfuscates distinctions that can still be made, important ones, and carries a certain acceptance of inevitability. This is the ever-present dilemma with so much tech criticism: it can be very easy to buy into certain narratives of inevitability and doomerism in a way that ends up contributing to a self-fulfilling dead-endedness. I get it, I’ve done it!
By focusing on two examples in my piece, a short film using shots from Seinfeld without people in them and the AI livestream featuring Seinfeld characters, we can begin to see (I’d argue) some of these distinctions, as both make use of the TV series as a repository of data (images, dialogue, iconography, patterns) for very different ends. In fact, the comparison is useful precisely because they have so little in common.
The way our online environments are becoming more and more saturated with AI-generated images and text will have implications beyond the obvious, as none of it is trustworthy and all of it is plausibly unreal. Of course, this has always been true, certainly online, but further back than that, too. As Kelly Clancy recently wrote for Nautilus, reflecting on Ramon Llull’s vision to craft a book as a mechanical logic machine which would answer any reader’s questions about God to convert the world to Christianity: “In the end, truth machines haven’t progressed much from Llull’s Ars Magna. The 13th-century zealot hoped to automate truth to dispel people’s uncertainty—instead we’ve automated the uncertainty.”
It may simply be a matter of scale, but the scale is towering. It is not something to be dismissed, even if we must resist the temptation to succumb to doomerism. We are plunging into an unknown abyss, and anyone claiming to know where we’re headed is probably (as ever) trying to sell you something. In the meantime, check out my piece!
At the same time, much has been made (among the nerdy losers who follow this stuff) of Goldman Sachs’ recent research paper arguing that AI is overhyped, specifically as a vehicle for investment (following another, similar take from Sequoia Capital’s David Cahn last month). Whatever Sachs, one of the world’s biggest investment banks, says about the markets is worth paying attention to, and they wonder “whether this large spend [on generative AI] will ever pay off in terms of AI benefits and returns.” Sachs previously predicted that AI investment would reach $200 billion globally by next year, and this new paper appears to throw some cold water on the viability of AI for the rich and powerful in finance, pointing to diminishing returns both economically and technologically. Even they can’t overlook how the promises coming out of companies like OpenAI and beyond seem increasingly outlandish, and how the overhyping of its impact on productivity is stifling its outlook.
Let’s keep in mind, though, that the audience for a research paper like this is different from the bank’s actual investors and venture bigwigs — you do not need to hand it to Goldman Sachs. More of us are certainly aware of the technology’s limitations and unreliability, and are thus questioning how much any of it will really change. In the paper, Jim Covello, Sachs’ head of global equity research, compares this “AI arms race” moment to “virtual reality, the metaverse, and blockchain,” which are “examples of technologies that saw substantial spend but have few—if any—real world applications today.”
While this is true in some sense, it also appears true that AI is nevertheless being adopted by far more businesses and organizations than any of those other examples, which means there’s no point pretending that it isn’t already having significant real-world applications, even if those applications remain limited in scope. The Sachs paper, then, is an interesting piece of financial rhetoric, kowtowing to industrial, academic, and cultural skeptics at their level while speaking very differently in other settings, like earnings calls or asset management outlooks, which have tended to reassert the power of AI investments.
If anything, then, this is a reminder to reject any narrative of inevitability, because none of this is pre-determined, even as we are pushed into greater vortexes of confusion. The way out is through, as it’s said. These predictions and prognostications are almost always nothing more than self-interested bluster, fragmented storytelling intended to throw you off with its largesse, but pay enough attention and the cracks emerge. New stories are ready to be told, about AI futures and about what we might actually want from such a technology — if anything at all.
Ephemera
Solid take from Evgeny Morozov on Big Tech’s political consciousness for The Guardian: “Should big tech firms be allowed to use data from public institutions to train privately owned, lucrative AI models? Why not make the data accessible to nonprofits and universities? Why should companies such as OpenAI, backed by venture capital, dominate this space?”
For the nerds: An Economic Security Report edited by Becky Chao on building a new political economy for AI, if such a thing is possible: “U.S. policymakers have embraced regulation when social, political, and economic problems have arisen in lax regulatory environments historically in the banking, pharmaceutical, and transportation sectors; it’s time to embrace a similar comprehensive regulatory approach in the tech sector. It is clear that we need to tackle concentrated power at its roots by targeting the underlying business model.”
Quite good film analysis by Juan Camilo Velásquez for MUBI Notebook on digital impressionist films like Skinamarink, The Human Surge 3, and Aggro Dr1ft: “Paradoxically, cutting-edge digital technologies push the mechanics of cinema away from indexicality and closer to painting and other graphic arts. Some filmmakers have responded by rejecting realism at a time when it is more attainable than ever.”
A.S. Hamrah eviscerates Emily Nussbaum’s new book about reality TV for Bookforum: “Knowing who these people were was part of my job, and if all this is a self-own, so be it. My job was valuable to me not just because it paid my bills but because of what I learned from it as a critic. Seeing the sausage get made at the C-suite level of television production, and then analyzing the fandom of the sausage, made me realize that every negative thing ever written about TV was true.”
Watch Al Jazeera’s harrowing documentary The Night Won’t End: Biden’s War on Gaza.
Song Rec: “Do you wanna” by Astrid Sonne