The Writers Guild of America (WGA) went on strike on Tuesday, as their negotiations with the Alliance of Motion Picture and Television Producers (AMPTP), which represents the major studios, networks, and streaming companies, stalled when neither side would budge. In very clear language, the WGA announced: “The companies’ behavior has created a gig economy inside a union workforce, and their immovable stance in this negotiation has betrayed a commitment to further devaluing the profession of writing.”
Many others have already written careful analyses of the strike and of many of the issues at stake, including data on how pay has not kept pace with inflation, so-called “mini rooms” (smaller writers’ rooms that hamper career advancement), and the steep fall of residuals in the age of streaming.
Of more interest to me, though, is how the strike has shaped up to be the first major battle over how artificial intelligence is going to impact jobs as more and more companies seek new ways to cut corners or save on labour costs. This has been a topic of renewed sociocultural interest since ChatGPT, DALL-E, and other tools exploded in popularity over the last year or so, and the strike is certainly the most high-profile example to date of workers fighting to protect themselves. The way the WGA negotiations over AI turn out will likely have a powerful influence on how AI and labour come to be discussed moving forward, so it’s important to pay attention.
First, let’s understand the WGA’s very simple, very reasonable demands about AI. As listed in their full two-page document, the WGA asks: “Regulate use of artificial intelligence on [Minimum Basic Agreement]-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.” The AMPTP’s response? “Rejected our proposal. Countered by offering annual meetings to discuss advancements in technology.” In essence, they want to be able to co-opt technology to their own ends.
The WGA, meanwhile, doesn’t want writers to be asked to adapt text written by AI, which would muddy how credit is assigned, and doesn’t want AI used to rewrite writers’ own work, which would also help to train the AI further. As the group tweeted: “It is important to note that AI software does not create anything. It generates a regurgitation of what it's fed,” and, moreover, “plagiarism is a feature of the AI process.”
As ChatGPT in particular has taken off, the discourse has largely been focused on how it will replace writers of all kinds. What the AMPTP seems to want here reflects how most corporations and institutions are thinking about what this form of generative AI offers them: increased power and productivity with the added bonus of saving on labour costs.
As Alissa Wilkinson rightfully points out at Vox, when the WGA was last on strike in 2007, a major sticking point was streaming residuals. That was relatively early in the age of streaming services: Netflix had only just launched its streaming option at the beginning of that year, with a rather small library, and it would take years for it to begin developing its own original series. In other words, streaming was in a similar spot in 2007 to where AI seems to be now in 2023. The WGA won very small residuals on streaming at the time, far smaller than what they got from broadcast, without knowing, of course, just what a powerhouse it would become throughout the industry. So you could say that they don’t want to repeat past mistakes by saying yes to minuscule concessions in the face of powerful new technology.
As Wilkinson writes, “The threat AI poses to creative writers is hard to fully imagine, because right now, AI tools are still pretty rudimentary. You can ask an AI to write essays or ideas or screenplays, and what it spits out has the creativity of a medium-bright 10th grader, regurgitated from the content it’s been trained on and unreliable when it comes to things like facts.” But the tech is advancing quickly, and more importantly, companies like Netflix have already expressed their extreme interest in using it to replace workers, beginning with animators. Meanwhile, major Marvel director Joe Russo recently stated that AI could be creating films within 2 years, as the tech will “change storytelling.” To be clear, AI will not be creating films within 2 years, but the point is that the people in power in Hollywood are looking to AI as a way to save money in the face of a shrinking economy and tighter market competition.
What’s really happening here is that corporate execs are betting on the supremacy of the Good Enough. As Ryan Broderick wrote in his newsletter today, “First, corporations will happily sacrifice quality for infinite amounts of passable AI sludge. And, second, unless you have some mechanism for collective action in your workplace, your boss is going to try and figure out a way to replace you with an AI.” The major studios in particular have conditioned audiences for years to get used to a certain kind of formulaic, quippy writing, so it seems roughly believable that they are more than open to AI substitutions that are likewise Good Enough, because those will be cheaper, more efficient, and more predictable. And some weirdos seem into it.
Broderick further points to examples like Disney’s push to turn actor likenesses into the company’s intellectual property, and he quotes from the American University Business Law Review: “Once an actor signs on to a project within a large and profitable franchise like Star Wars or Marvel, their likeness and voice could become part of the intellectual property due to its affiliation with the character.” Take that, add in AI-generated scripts, edits, effects, some kind of suite of automated tools, and, well, that sounds pretty good if you assume audiences will allow it. It’s a classic cost-benefit analysis, and these companies seem to at least be open to trying it out.
The WGA’s demand to make sure that an AI tool can’t be credited for a script is a clear way to stop that from happening, and so it is very telling that the AMPTP isn’t interested in it. As screenwriter John August told Wilkinson, “I see AI not so much as a threat to replace writers, but to push our pay lower,” which underlines that at the very least, the companies would try to pay writers less to punch up AI-generated scripts rather than replace them entirely. This is what the other WGA demand, about not using AI as source material, addresses.
It’s worth noting that the AMPTP is made up of more traditional studios, like Disney and Warner Bros., as well as streaming companies like Netflix and e-commerce giant Amazon. As TV writer Melissa Salmons pointed out to Jacobin reporter Alex Press, these companies actually have very divergent interests and value different things, and yet are forced to negotiate as one. One result of that, I’d speculate, is that an issue like AI must be addressed in a broad way, so that the only response all of the studios and streamers could agree on was annual meetings on the topic.
Moreover, as Lina Khan, chairperson of the Federal Trade Commission, argues in a New York Times op-ed today, AI is entirely unregulated, which means that the WGA’s fight is truly at the forefront of how we’re going to address it moving forward. Katie Kilkenny and Ashley Cullins reported today that some big executives are already asking questions about how they could utilize generative AI if the strike goes on for months, like the previous ones in 2007 and 1987 did.
The real threat that the WGA strike addresses, in the context of AI, is the cult of the Good Enough. It’s worth emphasizing that we don’t need to buy into the obscene hype around ChatGPT and other generative AI tools to take the threat they pose seriously. As ever, it’s not about the actual ability of the tech, it’s about who will wield it and how. In this case, the studios and streamers envision a gig work economy, where an AI spits out a rough draft and an actual human writer is brought in to polish it up for a smaller fee, to make it Good Enough. We, and the writers, deserve more.
Ephemera
One of the greatest living film critics, Richard Brody, published a list of the 20th century’s greatest independent films, which is well worth digging into. Some are obvious in their now-classic status, such as The Rules of the Game and The Gold Rush, but Brody also highlights films you may not think of as independent, like Psycho, and other somewhat lesser-known titles like Zeinabu irene Davis’ Compensation.
What happened to Jordan Neely, an unhoused person murdered on a NYC subway by a vigilante former Marine, is unspeakably horrible. Adam Johnson wrote for his newsletter about how this fits into larger insidious media narratives about the unhoused in our society.
Sarah Jeong wrote about the new hot social media platform, Bluesky, which doesn’t sound very great to me. I mean, first of all, it’s founded by Twitter founder Jack Dorsey. No thanks! As Jeong writes, “The thing people like the most about Bluesky seems to be the energy — a slippery, ever-changing thing, wholly reliant on fortune’s whims.” We’ll see if it lasts after the novelty wears off, but this doesn’t seem like the utopian post-platform era I was promised!
Song Recommendation: “Free Yourself” — Jessie Ware