Many within and outside the film industry, led by AI boosters, have been talking a lot recently about the idea of fully personalizable movies made entirely via AI. The pitch is that consumers will soon be able to use AI to spit out a feature-length film exactly to their specifications: say, a dark comedy starring a Will Smith deepfake as a man who can’t stop slapping people in the style of Wes Anderson, or whatever.
This struck a new nerve recently with the announcement from Fable, an AI company whose proof of concept so far is a semi-viral AI-generated episode of South Park, that it plans to launch a streaming platform called Showrunner (eyeroll), which will (allegedly) write, voice, and animate full episodes of existing series in its library based on user prompts of only a few words. It seems, ultimately, to be a way to cut costs in the creative industries: taking the model of user-generated content familiar to us all from platforms like YouTube and extending it to serialized content with the sheen of an actual production (only animation for now), without any of the work.
As Fable CEO Edward Saatchi put it, “The vision is to be the Netflix of AI. Maybe you finish all of the episodes of a show you’re watching and you click the button to make another episode. You can say what it should be about or you can let the AI make it itself.” Like all of these startups, their AI tech is trained on “publicly available data,” and Saatchi’s response to concerns about copyright is simply that “What matters to me is whether the output is original” and that the “content is what will decide whether the tech is worthwhile.” That’s settled, then!
I have more to say about this, obviously, but first I want to ask: who is this for? Seriously, who would want this beyond, I guess, experimenting with it a few times for fun? Whose idea of entertainment is this?
To be sure, Fable is just one absurd AI startup among many, trying to cash in on a moment of hype — this Saatchi guy also said “AI can definitely make better episodes of The Simpsons today,” so we’re not exactly dealing with the tech industry’s best and brightest.
And yet, they certainly have investment, the company won an Emmy for “innovation” in 2019, and they play into much larger fears about AI and the film and television industries. There is a palpable anxiety that audiences, trained in the Netflix age to expect greater levels of personalization and to accept increasing blandness in the content they consume (it is only “content,” after all) as it all becomes background noise to phone-scrolling, may in the near future accept AI slop that likewise has the veneer of personalization and the appearance of production value. For a culture defined by the Good Enough factor (as I wrote about a year ago), this could theoretically be a winning corporate strategy.
This has been a “dream” for some since well before the age of OpenAI and its own apparent proof of what this text-to-video future might look like through its Sora software. Moreover, it fits into existing logics of the streaming era, which is defined by a Netflix-led push into extremely detailed recommendation algorithms that collect vast amounts of data about what and how we watch in order to push more of the same to us, keeping us glued to the platform. The imagined AI creations would simply be, then, the apotheosis of this logic.
Ashton Kutcher, for one, seems to be welcoming the AI takeover in entertainment after playing around with Sora, asking, “Why are you going to watch my movie when you could just watch your own movie?” Kutcher sees this as a good thing, as the result will be so much content that only the absolute best stuff will rise to the top: “Any one piece of content is only going to be as valuable as you can get people to consume it. And so, thus the catalyzing ‘water cooler’ version of something being good, the bar is going to have to go way up.”
As with so much of this AI discourse, the more extreme prognostications about what it will soon do (just think what it will look like in a year, two years, five!) are just a way to further boost the market share of these companies, spurring investment and keeping the wheels greased.
But beyond the obvious financial incentives for people like Saatchi, Kutcher (who last year raised $243 million for an AI fund), or OpenAI’s Sam Altman, what can account for the apparent desire for these fully-personalized AI “movies”?
I recall an article from February in The New Yorker by Joshua Rothman, who was also reacting to Sora. He wrote:
Yesterday morning, my son was making funny faces at his sister. She smiled up at him from her bouncer. I reached for my phone to film the scene, then remembered that I’d put it in the other room, to avoid distraction. I know this moment happened; I can picture it in my mind. It would be a great clip in my home movie. There’s a sense in which I can “prompt” my brain (“Remember when?”) and generate a memory in response. So why not prompt an A.I.? What, exactly, would be wrong with a fake video of a real thing?
Good question, Joshua! He goes on to directly relate this to literature, and the blurred lines between fiction and non-fiction, as in Karl Ove Knausgaard’s My Struggle, which was marketed as fiction but is largely pulled from Knausgaard’s own life and experiences. “We understand, intuitively, that text is always a rendering of an idea, and that ideas are fluid. We know a book is not a recording, and a text is always slippery,” Rothman continued. “Books move us, sometimes happily, beyond representation and into imagination. That, apparently, is where everything is headed.”
Putting aside the common journalistic acquiescence to the AI industry’s narrative of inevitability, this has also stuck with me because it seems to home in on a fundamental misunderstanding among AI boosters about what art is, sure, but also about representation itself. It is bizarre to describe the act of reading a book as a way to move beyond representation and into imagination — these are constitutive experiences. Any AI “movie” would be, by definition, an output based on the aggregation of data. This presents, as far as I can tell, a distinct rupture between representation and imagination that AI can never overcome.
As Rob Horning has written, “When representations become data, they reinforce the utility of the infrastructures (algorithmic decision-making systems, AI models, etc.) developed to exploit them. And that infrastructure in turn reinforces the power relations authorizing the data.” When you then regurgitate that data back into representation, original contexts are erased, and so is subjectivity.
In other words, the allure of personalization being sold to us here (which, again, they are betting we have been conditioned to accept/expect) is utterly false. The “movie” you would receive based on your detailed prompt is not personalized to your subject position; it is the mere rendering of objects, bereft of context, and spat out as the approximation of vast stolen images and sounds, therefore reinforcing the power relations of the AI model rather than serving up something uniquely suited to the prompting subject.
Of course, the most likely realistic future for this technology is that companies will attempt to save on labour costs by using it for visual effects and other digital work, cutting corners where possible until industry unions hopefully push back.
In the meantime, phony visions of choose-your-own-adventure AI filmmaking for the consumer at home will proliferate, regardless of technological ability or public buy-in. While AI tools can be useful for certain tasks in filmmaking, or could simply be a fun toy for experimenting with ideas, the reality is that the primary purpose here is, as always, to further enrich the people behind these companies and to further hurt workers.
Art is about taking a risk to express something new. Don’t take my word for it: filmmaker Francis Ford Coppola, for instance, once said, “An essential element of any art is risk. If you don't take a risk then how are you going to make something really beautiful, that hasn't been seen before? I always like to say that cinema without risk is like having no sex and expecting to have a baby. You have to take a risk.” I mean, exactly! I might add that making a “movie” with AI is like having no sex and expecting to have a baby — you’re shooting blanks.
Ephemera
The great Edward Ongweso Jr. and Athena Sofides wrote an excellent political economic analysis of the insulin empire and pharma-capital for The Baffler: “Diabetic life has long been constrained and managed by a menagerie of bad-faith actors, from the paternalism undergirding the staunch starvation diets of early diabetes treatment to the pharmaco-medical industry’s stronghold on diabetes medication and technology today. And controlling the access and affordability of insulin has proven to be an, if not the most, integral piece of this regime. But the story of insulin is also one of reclamation: of agency, health education, and autonomy.”
Navneet Alang on why AI is a false god for The Walrus: “It’s not that one should simply resist technology; it can, after all, also have liberating effects. Rather, when big tech comes bearing gifts, you should probably look closely at what’s in the box.”
Lily Lynch for Noema on the troubling rise of billionaire startup cities: “All these aspirations are rooted in a desire to withdraw from existing polities and to escape their high taxes, regulations and the disorder of liberal democracy. It’s a yearning for new forms of self-governance and citizenship. In other words, they are political exit projects.”
Song Rec: “Top Dog” by Magdalena Bay (their new song “Death & Romance” is also very good, but brought me back to putting “Top Dog” on repeat)