James Earl Jones is done as Darth Vader, but his voice will live on because of AI

Darth Vader in costume. Photo by Matthew Lloyd/Getty Images

“I am your father” are four of the most famous words ever spoken on screen. When Darth Vader shattered Luke Skywalker’s world in “The Empire Strikes Back,” he sent shivers down the spines of audiences everywhere—in large part because of actor James Earl Jones’ famous baritone.

Rupal Patel, professor of communication sciences and disorders at Northeastern. Photo by Ruby Wallau/Northeastern University

Now, Jones, 91, has announced he is hanging up the mask and retiring as the voice of one of the most infamous cinematic villains. But don’t despair: Although Jones will no longer record new lines for Star Wars projects, the character—and Jones’ voice—will live on thanks to artificial intelligence.

As first reported by Vanity Fair, Respeecher, a Ukrainian voice synthesis company, will use a combination of archival recordings, voice acting and AI technology to continue bringing Darth Vader to the screen. 

This is just the latest example of how vocal AI is making its way into Hollywood—and reshaping the industry in the process. Respeecher has already used this technology in the Disney+ miniseries “Obi-Wan Kenobi” to create a Darth Vader that was closer to the version of the character in the original Star Wars trilogy. And Sonantic, another voice synthesis company, recently worked to recreate Val Kilmer’s voice for an emotional moment in “Top Gun: Maverick.”

As use of the technology has expanded, it has also raised questions about how AI will impact actors and their work, the entertainment industry and its reliance on well-known intellectual property—and our understanding of the human voice in general.

When it comes to Respeecher, concerns about humans vs. machines are a little more complicated. The company uses what is called a speech-to-speech (STS) approach, as opposed to text-to-speech. A human voice actor records the lines, and that performance drives an AI voice engine that has been trained on archival audio of a specific voice. In this case, the result is a voice that sounds like Jones’ Darth Vader but carries the inflection and melody of the human performer.

“STS models don’t require a famous talent to generate the final audio, but they do require someone whose delivery can be used to ‘breathe’ life into the voice model,” says Rupal Patel, professor of communication sciences and disorders at Northeastern.
