Thank you so much to everyone who submitted questions! I received some thoughtful, probing, and challenging ones (especially about AI, natch), which is great, and I’ll do my best to have something useful to say.
Baily asks: What are your thoughts on resorting to IP and copyright law and potentially tightening those laws as a method of fighting against generative AI? Do you think there will be negative consequences across artistic industries from using that strategy (e.g. restricting the use of music samples, collage, "fair use" for criticism, etc), or is it just the best tool artists and writers currently have to defend their ability to make a living with their art?
This is a great question, because it taps into one of my most fiercely held beliefs about AI and new technologies in general: the regulatory response always centres on creating entirely new guidelines, legislation, or governing bodies, bypassing existing law and standards because they are assumed to be too outdated to apply. If anything, this process is backward, and the consequence is that AI companies and others are more or less allowed to run amok while simultaneously calling for more regulation, so long as they’re involved in its drafting. As you suggest, it seems obvious to me that our starting point should be to apply existing laws and standards to AI, including intellectual property and copyright protections specifically in the case of generative AI art and models. With that said, you point to a potential danger of this method, which is that you risk over-correcting and becoming restrictive in the name of artistic protection. I am sympathetic to that possibility, of course, but in an ideal world, we would begin by restricting what OpenAI and others are able to do according to existing laws and then develop robust industry guidelines that speak more directly to the specificities of generative AI and the technology these companies are wielding.
As Alondra Nelson pointed out in Foreign Affairs in January, “trying to outpace government regulation is the tech industry’s deliberate strategy to circumvent oversight,” and so “political leaders cannot again buy the myth—peddled by self-interested tech leaders and investors—that supporting innovation requires suspending government’s regulatory duties.” Moreover:
When it comes to technology, policymakers too often believe that their approaches are constrained by a product’s novelty and must be subject to the views of expert creators. Lawmakers can become trapped in a false sense that specific new technologies always need specific new laws.
Of course, this is Foreign Affairs, after all, so the article goes on to champion classical liberalism and falling back on constitutional democratic principles for a “positive vision for governing AI” — OK, whatever. The larger point is well-taken, however, that with these systems, “their uses are not preordained. Their effects are not inevitable.” To get back to your original question (I realize I’ve gone off on a bit of a tangent), IP and copyright are potentially effective levers for artists to wield right now, but governance must take a much more holistic approach before the AI companies are able to impose that inevitability on us once and for all. For more insight grounded in lived experience, check out Paris Marx’s interview with concept artist Karla Ortiz, who mentions another useful, badass manoeuvre: algorithmic disgorgement.
Matt asks: I’m interested in how different streaming platforms handle content other than their own IP. As far as I know, only Prime and Apple are using a conventional rental method for smaller titles. Do you think this is helpful for the film medium at large? Do you think these add any “relevance” to these platforms, and does this model even make money amid the overwhelming glut of platform-produced content?
I have to admit I’m not as well-versed in the ins and outs of this part of the industry as I perhaps should be, but my sense is that those specifics will hardly matter within a year or two, because the streaming platform industry is on the precipice of a major restructuring and, most likely, narrowing. The general rule of thumb thus far for these companies has been licensing agreements that they constantly jockey over with increasingly absurd sums of money — think of the way TV shows like The Office or Friends make the rounds from Netflix to Hulu to whatever else, based on the length of a licensing agreement and how much money a streamer is willing to put up for the IP involved (which will, of course, be way smaller for the smaller titles you refer to). As with many parts of this economy, this jockeying now looks more unsustainable than ever, and my somewhat-informed expectation would be that the biggest players — Netflix, Disney, Amazon — will continue to bundle up with smaller streamers (if not buy them outright), folding their original productions in alongside their existing licensing agreements, depending on contracts and other legal mumbo jumbo. Will this be helpful for the industry? I mean, the biggest companies will probably continue to do well, and have more money to make more dreck, so…
Zach asks: What do you think about the role or value of curation (and related ideas like trust & 'human authenticity') in the future for online platforms? In other words, I periodically see comments about how labor in the knowledge economy (in the wake of ML/AI and other modes of automation and algorithmic alignment) will place a higher premium on the trustworthy judgment of authorities that surpass the algorithm--i.e., not merely credentialed people, but people who've built up a following. Do you think this is an accurate assessment of the near future? Will it be a case of luxury services for an affluent consumer base, at best?
I’ve been thinking a lot about curation in this context in light of Kyle Chayka’s book Filterworld, and the many interviews he’s been doing about it on the podcast circuit. My take on this is pretty much Max Read’s, which is that
In general while reading this book I had trouble understanding precisely where stuff like globalization, paid marketing, cultural exchange, and vicissitudes of style--all of which seem like important phenomena in accounting for cultural “homogenization”--ended, and where the pernicious effects of “algorithms”--understood throughout the book and here as a metonym for the handful of platform conglomerates that mediate internet commerce and sociality--began.
I am prone myself to talking up the evil influences of platform companies and their algorithms, and how it feels like so much of what we consume online (and off!) becomes flattened as a result of this machine-led curation. It’s worth remembering, as Read suggests, that external forces persist, from the economic networks of globalization that exist outside of our social feeds to the endless nuances of trendsetting and social exchange that are, again, not as easy to understand as “algorithm say Stanley cup, I Stanley cup.”
There is a theory going around, as you mention, that in light of this algorithmic culture, as intensified by AI, we will retvrn to more fully appreciating human expertise or knowledge that feels more authentic. I am not typically in the business of speculating about the future, as you ask of me here, but one growing avenue is certainly the luxury ecosystem for advanced consumers — think of the brief spate of “de-influencing,” where popular influencers recommended what not to buy in a bid for authentic (and ethical) consumption. Of course, it’s still about buying stuff, just doing it smarter! There are also countless subscription services that, for a price, provide similar guidance in a world of overwhelming slop. There is something appealing, of course, about the story of human-based knowledge coming back in style as we get fed up with algorithms and AI garbage, but as long as it continues to exist on the platforms we’re already using, the same socioeconomic incentives built into them will prevail.
Megan asks: Does AI creep into your dreams? What do you do with it? How do you process it wrapping our words, ideas, stories, creeping into our imaginaries?
I love this question! For the first part, I can say it does not creep into my dreams. You routinely see people make analogies about AI, including how surreal it is, the dreamlike logic it seems to resemble, but to me this is a further misrecognition of what AI is, and just another attempt to align it with the brain (neural networks, etc.). What AI outputs is aggregated mediocrity — Hito Steyerl’s mean images — whereas dreams give us specificity and personal resonance, something an AI model can never truly achieve. That aside, you obviously point to a larger phenomenon of imposition, which is the single thing that motivates my work on All Of This most forcefully. The imposition of technology in particular forms and to particular ends for particular uses is the original sin of Silicon Valley, we might say (I’m saying). My research in general seeks some corrective to this stifling of our imaginations, the way that everything we come to say about technology and far beyond is beholden to these impositions. The control over the imaginary is powerful and needs to be attacked, but it is also fundamentally unstable. These are only stories, like any other, and we can craft or impose our own. What does that actually look like on a practical level? Many things, and moving forward in my work, I want to explore the strategies that different groups are employing. For a start (self-promotion incoming), consider attending my upcoming event on March 26, “AI Futures: Can We Imagine Otherwise?”, hosted locally in Montreal but livestreamed on YouTube and Zoom for anyone to join. I have a great panel assembled with Paris Marx, Elena Altheman, and Ceyda Yolgörmez, and we will be discussing precisely how to tell new stories and reclaim our imaginations when it comes to AI.
Kevin asks: With the advent of Sora and other generative AI that can produce high-quality video content, what ramifications do you anticipate we'll see in Hollywood? Could we see feature-length presentations written and visualized by AI, and is there an avenue for unions and guilds to protect themselves as technology advances quicker than strikes and agreements can come together?
Again, I am wary of speculating too much, especially when so many industrial, economic, and political factors will shape how this develops, even putting aside cultural tastes and expectations. We can at least say that the Writers Guild and, to a lesser extent, SAG-AFTRA (whose deal was criticized for not including enough protection against so-called “synthetic performers,” those not based at all on a real actor’s likeness) achieved significant wins on AI during their strike actions last year, which should make it very difficult for Hollywood studios specifically to let AI take over. Still, the scourge of Good Enough will prevail to some extent, and others in the industry remain woefully unprotected, perhaps most significantly visual effects artists and animators, who may be the most at risk of AI taking over their jobs. As ever, playing too far into these fears also ends up being free advertising for Sora and other models, especially as we over-estimate what they’re capable of. What they can generate right now is certainly impressive, but still a long way from coming close to a convincing feature-length film. So we can’t accept this overriding fear simply because AI boosters insist the technology will advance according to ideals of linear progress: the idea, again, that it will inevitably end up creating convincing two-hour narrative features when it can currently only generate off-putting 60-second guesses. In the meantime, other industry craftspeople must organize, as the WGA and SAG did, as soon as possible, particularly the Art Directors Guild.
In the same vein, Duncan asks: The progression of film history and technology are intrinsically linked. As we seem to be approaching a plateau in terms of how Computer Generated Imagery can expand filmmakers' capabilities in telling stories, what technological advancements will most significantly impact how films are produced in the coming decades?
So many demands to speculate! But no, it’s a good question; I really just don’t know, to be honest with you. As mentioned in the previous answer, AI — without ceding ground to the boosters — does seem poised to change film production in various ways, and the shape that takes will depend on which further protections are achieved in the industry. I also take your point about the plateauing of CGI and digital effects work. I’m a longtime advocate for expressive and bold experimentation with digital filmmaking, your Inland Empires and your Bamboozleds, and am so disappointed with how the industry left all that behind for the bland digital nothingness we’re now inundated by. My hope would be that, similar to the earlier question about algorithmic curation, there is a retvrn to bizarre experimentation, even if only as a marketing tool.
And finally, Alex asks: Love to hear what you like/find interesting about Hong Sang-soo as a filmmaker and what you'd recommend as a starting point for someone uninitiated in his filmography.
Hell yeah. The common talking point for fans of Hong Sang-soo is that all of his films are incredibly similar, and the fun or beauty of being really into him is to get excited about the minor variations each time, and what they signify. This is broadly true of my experience, as well, though I do think it risks flattening the bold divergences he often takes. Yes, many or most of his films are about an older male filmmaker having long, meandering, drunken conversations with (younger) women or with friends, playing with autobiographical details and cultural specificities. Stylistically, he likewise often repeats himself with similar or identical setups and compositions, broken up by striking zooms. He is also, though, always interested in the interplay between stylistic and narrative structures, and sometimes he will mess with one, sometimes the other, sometimes both. One of his latest movies, In Water (which I’ve yet to see), was shot almost entirely out of focus. Walk Up, from 2022, explores overlapping stories between overlapping characters living on different levels (and timelines?) in the same walk-up building. You could call these premises or visual approaches gimmicks, but their significance lies in the repetitions as well as in the severe differences.
For this reason, I typically recommend the first film of his I saw, Right Now, Wrong Then (2015), as a great introduction. The “gimmick” is easy to grasp, loaded with meaning and emotion, and it’s thrilling to watch it unfold. If it resonates with you as it did with me, you know you’re locked in. If not, maybe he’s not for you (in which case Hill of Freedom is an excellent, 67-minute mindtrip that you could try next, just in case). As you watch more and more of his films, the common knowledge goes, his entire body of work begins to shift into something of great scope, an accumulating sense of a single artist’s obsessions, perversions, insights, and revelations over an incredibly prolific career. Try it!
Thank you all so much again for your questions! If you liked this, maybe I’ll do it again in a few months. Thanks for reading!