By Jacques Barreau*

Lately, everyone is talking about Artificial Intelligence. That’s because AI can morph your voice into someone else’s voice. AI can replace actors with synthetic voices. AI can also generate audio with Text to Speech. And finally, AI can now transform the original actor’s face on screen so that it matches the dubbed audio track.

But what about human emotions? When we speak, and even more when we act, we can produce an infinite number of nuances to express ideas, a little like all the shades of gray between black and white. How many emotions could AI reproduce? We could very well see a simplification of the “art” of acting in dubbing because AI can reproduce only a few basic emotions. So far, AI is very limited in this field, but as we’ve been seeing in recent months, it learns very quickly. But will AI be able to “interpret” a character the same way an experienced actor does? Some voice actors are able to add a personal touch to their interpretation while following the director’s intent at the same time.

As I like to say, we don’t dub to a language, we dub to a culture. So should we rely on AI to adapt dubbed dialogue to a culture? Or is this a domain where human feeling is still very important to avoid a systematic approach that could also become predictable? If you watch the same theater play performed by different actors, each performance will vary, as each actor will add his or her personal touch while staying true to the text. This is probably where humans and AI differ the most.

The emergence of streaming platforms created a bottleneck in the distribution process. Timelines became shorter, and more content therefore needs to be dubbed every day. Just as Machine Translation didn’t jeopardize the human translation business, even though everyone was sure of the opposite at the time, AI will probably not put the human dubbing business at risk. AI should be seen as a great opportunity to do what humans have difficulty doing: dubbing a lot of content very fast. Text to Speech (TTS) could be used for many projects (like e-learning and documentaries), while actors could concentrate on projects where all the nuances of emotion are needed to convey the attitudes of the original actors.

Additionally, with voice morphing, a small pool of actors could dub more characters, as their voices could be changed after recording. Is this a threat to actors? It could be, but not necessarily, as the demand for new actors is simultaneously growing very fast. The best actors (and this is currently the case, as a small group does the majority of the work) will be able to concentrate on the acting, while AI changes their voices in such a way that they will not be recognized.

Each time a new tool is introduced, the reaction is usually one of suspicion. This is exactly what happened with the digital revolution: when the CD replaced vinyl, when recording was done on a computer instead of tape, or, more recently, with the digital “rythmo” band used in markets other than France, where many actors didn’t want to abandon their beloved paper scripts during recording. In each of these examples, the new technology earned an important place in the new processes.

So, instead of fearing that AI will replace actors in the dubbing process, we should ask what AI will be able to do for the dubbing industry. What tasks will AI take on so that we can dub faster while keeping human actors, directors, and writers involved in the most creative steps of the process?

First, AI could be a great teacher. In emerging countries where our company recently started to dub, including Kenya, Vietnam, and Morocco, AI can create interest in the dubbing business among a younger generation. AI could also be a great quality-control expert, given all the reference models it has available.

On the same topic, AI can now generate pictures in the style of well-known painters; could it also generate speech in the style of famous actors?

AI can also generate text, which is one of the growing fears among the writers striking in Hollywood. This time around, the strike is not only about salary raises but also about the use of AI to generate text that could replace writers’ work. The writers are demanding that AI not be used to generate scripts and that it not be trained on their previous scripts. This is the most important point in all of these discussions, as AI only exists because we are feeding it millions of scripts, voices, pictures, and so on.

An actor’s image, and also their voice, is part of their likeness, and no one can use it without their consent. Just as writers don’t want their scripts used to train AI engines, actors don’t want AI to reproduce their voices. This could result in less work for them, so it makes sense that they wouldn’t give up the rights to use their voices (rights worth a variable amount of money, depending on a given actor’s notoriety). Their voices would remain recognizable in the case of morphing, but not if they become one component of a synthetic voice.

The main issue is that there is no regulation on this matter; the U.S. government is seriously considering creating new rules, as many already exist in the dubbing field globally. At the same time, United Voice Artists, a worldwide group of voice acting guilds, associations, and unions, is officially calling on the European Union to address “the need to adjust the protection of artists’ rights and General Data Protection Regulation (GDPR) rules, with the development of AI technologies in Europe.” If rates for dubbing actors are set in most dubbing countries, why don’t we have rates for actors who choose to lend their voices to train AI? In the TTS and morphing cases, it will be easy to recognize the source, but in the case of synthetic voices, which are generated from a huge database of voices, who will own the rights if no one can identify the different components of these new voices? This will be a real challenge for legal teams worldwide.

In conclusion, will AI become more human than humans? We don’t know what new applications will be discovered using AI, but we can say that AI will be part of our lives in the future, and we won’t be able to change that. AI will certainly progress on all the points above, but it cannot be seen only as a threat to the dubbing community. In fact, we all hope that it could be a great teacher and a great help for new generations entering the wonderful world of dubbing!

*Jacques Barreau is considered the Dean of Dubbing. He is Vice President of the Barcelona, Spain-based Media & Interactive Entertainment, a division of the New York City-headquartered TransPerfect Media.

