The Issues With Machine Dubbing For Asian Languages

Although dubbing in film and television has been around since the 1950s, machine dubbing is a much more recent phenomenon. Video content has proliferated all around the world, and globalization means that one culture can enjoy another culture’s film and video productions, and vice versa.

Borders and boundaries are shifting and shrinking, and we have access to far more “foreign” content than ever before. But machine dubbing doesn’t come without its challenges.

It’s far from a perfected art, especially when it comes to Asian languages. Why is this the case? There are several reasons, which we’ll explore below. Ready to find out more? Let’s take a look.

What is Machine Dubbing?

In the field of translation, and particularly when it comes to film, video, and television series, there are several different branches to consider. The most familiar is direct translation.

But there are other branches as well. These include localization, which is essentially the process of taking a text in the original language and translating it in such a way that the intended meaning and nuances are carried through as well.

The translation tree also includes dubbing. In short, this is audiovisual translation that takes the dialogue, translates it, and re-performs it in such a way as to give the impression that the media was produced in the target language.

One sticky part is timing: the dubbing actor needs to use the available time optimally, speaking for roughly as long as the actor in the original piece. Another challenge is alignment with lip movements. All of this gets even more complicated when machine dubbing comes into the picture. So, what is it?


Machine dubbing brings the capabilities of machines to the translation-localization-dubbing process. Some sources describe the process as divided into several components. First, the video’s audio is transcribed to text by a Speech-to-Text API. Then, that text is translated by a Translation API.

Finally, the translated text is synthesized into speech by a Text-to-Speech API. If a human were to do all of this manually, it would take hundreds of hours for just one film or video. With machines, the process can be simplified, expedited, and made significantly more convenient for viewers.
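
To make that three-stage pipeline concrete, here is a minimal sketch in Python. The article doesn’t name any particular tools, so the stack below is an assumption: the open-source Whisper model for Speech-to-Text, a MarianMT model served through Hugging Face transformers for translation, and gTTS for Text-to-Speech. Exact model names and language codes may need adjusting, and a commercial cloud API could stand in for any of the three stages.

```python
# Minimal sketch of the transcribe -> translate -> synthesize pipeline described above.
# Assumed stack (not prescribed by the article): openai-whisper, transformers, gTTS.
import whisper
from gtts import gTTS
from transformers import pipeline


def dub_clip(audio_path: str, out_path: str) -> str:
    # Stage 1: Speech-to-Text - transcribe the original (English) audio track.
    stt_model = whisper.load_model("base")
    transcript = stt_model.transcribe(audio_path, language="en")["text"]

    # Stage 2: Translation - any transcription error is carried straight into this step.
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")
    translation = translator(transcript)[0]["translation_text"]

    # Stage 3: Text-to-Speech - synthesize the translated script as a new audio track.
    # (Language codes vary by gTTS version; "zh-CN" is assumed here.)
    gTTS(text=translation, lang="zh-CN").save(out_path)
    return translation


if __name__ == "__main__":
    print(dub_clip("scene.wav", "scene_dubbed.mp3"))
```

In a real system, each stage would also carry timing metadata, because the synthesized audio has to be fitted back onto the original video, which is exactly where the issues below come in.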

Issues With Auto Dubbing

While all that’s been said above sounds quite simple and straightforward, there are several issues to consider.

As a starting point, errors can creep in during the Speech-to-Text transcription, which means the Translation API ends up working from an inaccurate or awkward transcript and producing inaccurate or awkward translations. Next up, the Text-to-Speech stage can mispronounce those translations and convey a different meaning, tone, or nuance.

But there are other challenges, too. These include:

  • The need to match the voice as well as the tone of the character that’s being dubbed;
  • Using translations that fit the time span of the original speech (see the duration-check sketch after this list); and
  • Achieving a realistic alignment of the dubbed audio with the lip movements of the actual actors.
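
On the second point, one way an automated pipeline can sanity-check its output is to compare the length of the synthesized dub against the span of the original line and flag segments that would need too much stretching or trimming. The helper below is a hypothetical illustration rather than part of any particular dubbing product, and the 10% tolerance is an arbitrary assumption.

```python
# Hypothetical isochrony check: does the synthesized dub fit the original line's time span?
from dataclasses import dataclass


@dataclass
class Segment:
    original_s: float  # how long the on-screen actor speaks, in seconds
    dubbed_s: float    # how long the synthesized translation runs, in seconds


def fit_report(seg: Segment, tolerance: float = 0.10) -> str:
    """Report how much the dub must be stretched or compressed to match the original."""
    ratio = seg.dubbed_s / seg.original_s
    if abs(ratio - 1.0) <= tolerance:
        return f"OK (ratio {ratio:.2f})"
    if ratio > 1.0:
        return f"Too long: speed up by {ratio:.2f}x or shorten the translation"
    return f"Too short: slow the delivery to {ratio:.2f}x or pad with a pause"


if __name__ == "__main__":
    # e.g. a 3.0-second English line whose translated dub came out at 4.2 seconds
    print(fit_report(Segment(original_s=3.0, dubbed_s=4.2)))
```

Voice matching and lip-sync alignment are harder still, since they depend on the synthesized voice itself rather than on anything that can be checked with simple arithmetic.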

Language-Specific Problems With Dubbing

It’s clear that the Asian continent is home to multiple language groups, cultures, and subcultures. These special nuances need to be taken into account when it comes to machine dubbing, too.
Some examples of how this comes into play are in the following scenarios:

  • Japanese and English word order are far from the same. A poor machine dubbing scenario could therefore produce a jumbled set of words, too few or too many words, or, if idioms are used, a completely different meaning.
  • Korean honorifics don’t have a direct English translation. When a sentence ends in the Korean “-yo”, it’s a sign of respect and politeness; English makes no such distinction. The closest parallel might be the German “Sie”, but even that fails to capture how the correct honorific depends on the Korean listener’s age, their relationship to the speaker, and how much authority they hold.
  • In Thai, little to no punctuation is used within sentences, so figuring out where one statement ends and the next begins poses a further challenge for machine dubbing; the sketch after this list illustrates the point.
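
To see why the Thai case trips up an automated pipeline, consider a naive splitter that segments a transcript on Western sentence punctuation. It handles the English line but returns the Thai text as a single undivided block, which is why Thai segmentation needs language-aware tooling rather than punctuation rules. The Thai sentence is just an illustrative sample, not a line from any particular film.

```python
# Naive punctuation-based sentence splitting: fine for English, useless for Thai,
# which is typically written without sentence-final punctuation.
import re


def naive_split(text: str) -> list[str]:
    # Split only on Western sentence-ending punctuation followed by whitespace.
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]


english = "Hello. The weather is very nice today. I am going to the market."
thai = "สวัสดีครับวันนี้อากาศดีมากผมจะไปตลาด"  # roughly the same three statements, written the Thai way

print(naive_split(english))  # -> three segments
print(naive_split(thai))     # -> one segment: the splitter finds no boundaries at all
```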

These are just some examples of the challenges that can arise. Overall, it’s been argued that there aren’t yet large enough repositories of Asian-language data to build a strong resource corpus. With the passage of time and continuous technological advancements, however, this is slowly changing.

Asian Languages And Machine Dubbing: It’s Not That Clear-Cut

Whether Chinese, Japanese, Korean, Hindi, Thai, or something else, each language group comes with a rich set of cultures that embody it. It is often very difficult to translate a specific Asian-language text into English, or an English text into a specific Asian language, sometimes because a direct translation is simply impossible.

The next best thing is a substitute text, or a translation that conveys the intended meaning as faithfully as possible. This is where localization comes in, and it is clearly needed in machine dubbing, too. A direct translation piped straight into Text-to-Speech will render a video that is awkward at best and frustrating at worst, with its intended meaning lost or skewed.

To avoid frustrating viewers, it’s critical that machine dubbing take into account the challenges posed by translating and localizing Asian languages, and that a strong resource base be built for these languages, so that future machine dubbing becomes more fruitful and of higher quality.