What if you could dub Morgan Freeman’s voice into any language while retaining the actor’s iconic pitch and tone?
That’s the idea behind deepdub, an Israel-based startup that aims to make a name for itself in the dubbing industry. The company, which made headlines after former WarnerMedia executive Kevin Reilly joined its advisory board earlier this month, claims that it can dub films, television shows, and video games via a technology that retains the sound of the original actors in any given project.
Dubbing might be an expensive and time-consuming process, but it has become increasingly vital for film and television distributors looking to take their products to international markets: Netflix announced in its Q4 2020 earnings that the vast majority of its recent subscriber gains came from outside the United States, while China’s influence on the global box office has grown rapidly in recent years. (Last week “Lupin” became the first French-language show to land on the streamer’s U.S. Top 10 list.) The demand for high-quality dubbing has never been higher, and deepdub positions itself as the solution.
“The technology is based on deep learning and artificial intelligence based on neural networks,” Oz Krakowski, deepdub’s chief marketing officer, told IndieWire in an interview. “Basically, you give it an audio track of a video or movie or series. Ideally, the voice in the audio track is isolated. If not, we have to go through and isolate the audio, which is something we can do. The deep learning machine learns the traits of the voice, like pitch, timbre, the speed, the spacing, and intonation of the words. It learns and registers them and can then apply it to new voices. We can take a voice and add or take away an accent or alter it to give it emotions or make it sound younger or older. It gives us the ability to take a voice and apply it to a different language.”
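deepdub has not published its architecture, and the real system is a neural model far more complex than anything shown here, but the workflow Krakowski describes — learn a speaker’s traits once from isolated audio, then reapply them to new speech in any language — can be sketched in plain Python. All names below (`VoiceProfile`, `extract_profile`, `dub`) are illustrative stand-ins, not deepdub’s API:

```python
from dataclasses import dataclass

@dataclass
class VoiceProfile:
    """Traits Krakowski says the model learns from an isolated voice track."""
    pitch_hz: float     # characteristic pitch
    timbre: str         # tonal quality of the voice
    speed_wpm: float    # speaking rate
    intonation: str     # rise-and-fall pattern of the words

def extract_profile(isolated_track: dict) -> VoiceProfile:
    """Stand-in for the deep-learning step: register a speaker's traits."""
    return VoiceProfile(
        pitch_hz=isolated_track["pitch_hz"],
        timbre=isolated_track["timbre"],
        speed_wpm=isolated_track["speed_wpm"],
        intonation=isolated_track["intonation"],
    )

def dub(profile: VoiceProfile, translated_text: str, language: str) -> dict:
    """Stand-in for synthesis: apply the learned traits to translated speech."""
    return {
        "language": language,
        "text": translated_text,
        "pitch_hz": profile.pitch_hz,  # original speaker's pitch retained
        "timbre": profile.timbre,      # original timbre retained
    }

# Profile the voice once, then dub into any number of languages.
source = {"pitch_hz": 95.0, "timbre": "warm", "speed_wpm": 140.0,
          "intonation": "measured"}
profile = extract_profile(source)
spanish = dub(profile, "Hola, mundo", "es")
german = dub(profile, "Hallo, Welt", "de")
```

The key property the demo illustrates is visible even in this toy version: the language and text change per call, while the voice traits carried in the profile stay constant.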
Although deepdub has yet to prove itself via publicly available products, the potential behind its pitch is apparent: The company shared a private tech demo with IndieWire that centered on Freeman. The actor opened the demo by speaking in English before being dubbed into Spanish and other languages, and though the language changed, the narration always sounded like Freeman’s natural voice. (The company also has a public demo, in which the narration for an unscripted car show seamlessly transitions from English to Spanish, German, and French.)
Though Krakowski claimed that deepdub can dub an actor based on just a few voice samples, the company’s technology has yet to be publicly used on a mainstream film or television show. deepdub has yet to announce any deals with major streaming services or other distributors — Krakowski said deepdub was in talks with several major entertainment businesses — but the company already has the support of one well-known Hollywood power player in Reilly, HBO Max’s former chief content officer, who joined the company’s advisory board several weeks ago.
“Having led some of the world’s most recognized entertainment brands, I can clearly see why deepdub’s technology is a game changer for studios, content creators and producers,” Reilly, who was not available for an interview, said in a statement. “Upon learning more about the organization and the team behind it, I knew immediately that they were on to something deeply exciting, and that I wanted to be a part of, especially as entertainment content becomes increasingly globalized. I look forward to facilitating their success and expansion.”
If Hollywood’s major companies take a liking to deepdub, the company’s technology could become a game-changer for the dubbing industry — as well as a major disruptor. The company, which has around 10 employees, markets itself as a more efficient, affordable, and accurate alternative to traditional dubbing, and its success could put dubbing professionals out of work. Reilly noted that deepdub could “upend the process” of dubbing in a recent interview with Deadline but argued that it could also bring positives to the industry, such as allowing actors’ voices to be syndicated in other languages.
Krakowski echoed Reilly’s syndication idea and stressed that voice-modulating technology such as deepfakes was already poised to make an impact on the entertainment industry. He argued that it was important for companies like deepdub to begin working with key Hollywood businesses to monetize the technology.
“There’s always the concern when you bring something totally new and disruptive to the current industry, we expect that concern to go up. At the same time, such level of disruption is to some extent inevitable,” Krakowski said. “We see the deepfakes and all the AI that is coming to the market and all around us. It’s coming here and the idea is to get in front of it and that’s why we’re trying to work with big studios to get in front of it and embrace it… With a disruptive technology, you may be running over one role but you are creating new opportunities for others.”
The appeal of faithfully recreating famous actors’ voices in other languages aside, deepdub is also marketing itself as an efficient alternative to hiring voice actors to dub individual films or television shows. Krakowski claimed that deepdub can currently dub an eight-episode television series in six weeks, well ahead of the 14 to 16 weeks it typically takes to dub the same show via traditional methods. If deepdub’s technology catches on, that quicker turnaround for dubbed content could also help reduce piracy, according to Krakowski.
“One of the struggles we hear from companies is that when they release important content, they have to fight the time window between releasing in the United States and releasing in other regions,” Krakowski said. “The bigger the time window is, the more they open themselves to piracy. With the ability to shorten the time dramatically and expand to regions with localized versions, they’re improving their way of fighting piracy.”