When it comes to AI, I’m in disagreement with Hollywood auteur Paul Schrader.
The septuagenarian screenwriter of Taxi Driver is spellbound by AI. On Facebook in January, Schrader marvels,
“I’m stunned. I just asked ChatGPT for ‘an idea for a Paul Schrader film.’ Then Paul Thomas Anderson. Then Quentin Tarantino. Then Harmony Korine. Then Ingmar Bergman. Then Rossellini. Lang. Scorsese. Murnau. Capra. Ford. Spielberg. Lynch. Every idea ChatGPT came up with—in a few seconds—was good. And original. And fleshed out. Why should writers sit around for months searching for a good idea when AI can provide one in seconds?”
I’ll respond to his (admittedly rhetorical) question: Because such AI reliance poses existential threats, such as mass copyright infringement, widespread job displacement, and the dilution of human creativity in society.
Nevertheless, when it comes to handling these existential threats posed by generative AI (genAI), Hollywood offers a particularly instructive case study, one that holds invaluable lessons for executives in many other industries.
The Hollywood case study: AI’s impact on labor
Unlike some jobs poised to be upended by AI—e.g., copy editors, translators, paralegals, call center workers—many Hollywood jobs are protected by strong trade unions.
These unions certainly went to bat for their constituents in 2023. Facing existential threats from AI, both the Writers Guild of America (WGA) and the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) went on strike that year. The 148-day WGA strike ended on September 26, 2023, and the 118-day SAG-AFTRA strike wrapped up roughly six weeks later. We’ll cover each in turn.
The SAG-AFTRA deal and actors’ digital replicas
The 2023 strike was necessary because developments in the AI space were happening so quickly. Speaking at a Brookings Institution event, SAG-AFTRA national executive director and chief negotiator Duncan Crabtree-Ireland explains, “We couldn’t wait three years until our next contract cycle. It had to be dealt with now.”
A deal was eventually struck to ensure that actors had strong digital provisions, including the right to fair compensation and the principle of informed consent. According to the agreement, studios must get “clear and conspicuous” consent from actors before using a digital replica.
Essentially, if a studio wants to use an actor’s digital replica, there has to be a “reasonably descriptive explanation” of what they’re going to do with this replica. Legal language related to the use of such replicas cannot be boilerplate, and the explanation can’t be buried on page 32 of a contract.
Also, if digital replicas are employed, studios must compensate actors for the amount of time it would have taken them to perform the scenes themselves.
As far as what constitutes “fair compensation” for the use of a digital replica, there will be a negotiation process between the actor and the studio. Importantly, the SAG-AFTRA agreement’s “digital replication provisions” include all actors, including background actors.
Although the agreement was successful, by no means do the lawyers at SAG-AFTRA feel they are out of the woods just yet.
Crabtree-Ireland says, “In terms of looking to the future, what we’re concerned about is the growing use of generative AI, and in particular the idea of creating synthetic performers.” By “synthetic performers,” he means fully synthetic actors who are not merely replicas of existing ones.
The WGA and writers’ compensation
In regard to the writers’ strike, WGA lawyers had some leverage in the negotiations because creative works made entirely from AI are not copyrightable. According to the U.S. Copyright Office, a human has to “select and arrange” AI-generated content in a creative way in order for it to be eligible for copyright.
In the wake of the WGA agreement, writers now receive full credit for all written content—even if generative AI was used in the process.
Additionally, if a studio provides a writer with an AI-generated draft, this draft is not considered to be “source material”; hence, writers are entitled to full compensation (and writing credit) for everything. Writers also get paid for polishes and second drafts of AI-generated scripts. And lastly, they have agency in the process. Writers can decide to use genAI in their writing process; however, the studio cannot force them to do so.
Despite these legal wins, there’s still a great deal of consternation among Hollywood talent. Speaking with the Brookings Institution, writer David Goodman, who executive produces Family Guy, says,
“I do feel that this technology is impressive. At some point, it could literally generate what are seemingly original scripts. It’s very scary. As soon as the companies can get rid of writers, they will.”
Even if film studios don’t immediately get rid of writers, there is a fear that AI developments could make writing a far less profitable enterprise. The job of a writer could shrink to merely rewriting or polishing AI-generated scripts, which would be dramatically less lucrative for screenwriters: rewrites garner roughly 25% of what original screenplays command.
Other lingering concerns: AI model training, derivative content, and biases
Despite the WGA and SAG-AFTRA deals, there is still cause for concern for Hollywood talent. Firstly, the contracts only cover the next 36 months. Secondly, no agreement has been reached regarding the use of copyrighted material to train AI models.
As of now, writers can request that genAI companies don’t use their copyrighted content to train models. That said, how exactly do writers know that their works are not being used for model training? They don’t.
Copyright issues aside, there are warranted concerns that AI will cause the quality of film and TV content to deteriorate. The technology might just churn out derivative and mediocre (yet serviceable) works. What’s more, seeing as genAI models are trained on old scripts, it stands to reason that these genAI-drafted scripts will contain the myriad biases from Hollywood’s past writing.
Although lucrative in the short term, this removal of the human element is concerning. I agree with Crabtree-Ireland, who laments, “It’s a big concern from a labor union perspective, but also as a human being—and someone who cares about culture—what the impact of taking human creativity out of the cultural marketplace would really mean for our society.”
Recent genAI partnerships, the studios’ dilemma, and the future
GenAI companies are courting Hollywood studios. OpenAI is particularly keen to see studios using its video generation model, Sora. That said, so far, studios have been reluctant to enter into partnerships, although there are a couple of exceptions.
In mid-February, UK-based visual effects (VFX) group DNEG acquired Metaphysic, an AI content creation start-up and a partner in the content provenance consortium C2PA.
Another notable exception is Lionsgate. The mini-major has a partnership with New York-based AI start-up Runway. It’s a symbiotic relationship: Runway trains its genAI model on Lionsgate’s extensive catalog of films and TV shows; in turn, Lionsgate uses Runway’s AI to generate content for future projects (and to reduce its VFX costs).
The studios’ dilemma is as simple as this: studios want to save money on production costs; however, they don’t want to alienate their talent and generate derivative content.
Also, some writers are furious that there is no language in the recent WGA agreement preventing genAI companies from training their models on writers’ copyrighted works. Speaking to the L.A. Times, The Killing showrunner Veena Sud says, “I’m stunned, disgusted, horrified at what is essentially straight-up plagiarism.”
Screenwriter John Rogers shares Sud’s anger. He laments, “These companies have gotten hundreds of billions of dollars of value that would not exist if not for our work.” Although no production companies have sued genAI companies on Hollywood talent’s behalf, some actors and writers are taking things into their own hands.
For example, Robert Downey, Jr. and Nicolas Cage have both voiced concerns about the use of their digital likenesses. Downey, Jr. has vowed to sue anyone who uses his digital replica—even after his death. He says, “[I’ll be dead], but my law firm will still be very active.” Cage agrees with this sentiment, saying, “I mean, what are you going to do with my body and my face when I’m dead? I don’t want you to do anything with it!”
There’s even the distinct possibility of fully synthetic Hollywood stars down the line. As UVA economics professor and Brookings Institution fellow Anton Korinek says, “I think there is a significant risk that the actors of tomorrow, including the superstars among actors of tomorrow, may also be fully synthetic.”
Hollywood’s ongoing AI battle is a forewarning for other industries
AI’s impact on the Hollywood workforce and labor policy may be a preview of how things play out in other industries. It is important to remember that, despite some fear-mongering in the tech news media, the AI situation is not beyond human control. Humans ultimately steer how AI and other emerging technologies affect labor and industries.
AI isn’t inherently bad or good. Employed correctly, AI can lower costs for studios and make writers, actors, and VFX workers’ jobs easier; however, it’s important that workers remain a part of the conversation. This may require union intervention in some instances.
As AI assumes an increasingly significant role in more and more industries, it is vital that guardrails are put in place. After all, AI should augment (rather than replace) humans’ work.
Across all industries, there will be tensions among business interests, efforts to respect the human work that came before, and the challenge of maintaining the integrity and quality of products. This was certainly true in the recent Hollywood labor struggle, and it will be true elsewhere.