Big Data needs localization
Translation of content is a big part of localizing any offering for a specific country or region.
US-based independent market research firm CSA Research recently surveyed 8,709 consumers in 29 countries across Asia, North America, South America and Europe, and found that 76% of online shoppers prefer buying products advertised in their native language. Hence the business need to generate multilingual content (websites, apps, videos, emails, podcasts, webinars, you name it) for global campaigns. But translation is required in other industries as well. Travel companies need it to serve international tourists. Entertainment companies, like Netflix, need to scale up content in multiple languages for a global audience. Even politicians need meticulous translation of sensitive content to support diplomatic foreign relations.
AI-powered tools are breaking the language barrier
In the quest to serve localized content that resonates best with audiences, Google recommends that businesses use automated translation software in tandem with native human translators.
While humans are better placed to produce authentic translations, humans alone could never meet the speed and scale of translation demanded today. Imagine if humans had to translate a full e-commerce website from one language to another while its content is dynamically updated every day. Or consider Facebook, where global users post content in more than 160 languages across billions of posts. It is machine-led, AI-driven solutions that handle content at this volume, while offering users a high-speed competitive advantage.
Tech giants such as Google, Apple and Microsoft have invested billions over the years in updating their neural machine translation (NMT) software: Google Translate, Apple Translate and Microsoft Translator. Sprouting from the base NMT technology, hundreds of translation applications have flooded the market – Tarjama, Unbabel, Babylon and IBM’s Watson, to name just a few – generating anywhere from very basic to very advanced translated output.
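To make the base technology a little more concrete, here is a minimal sketch of running an open NMT model locally with the Hugging Face transformers library and one of the freely available Helsinki-NLP MarianMT models. This is an illustrative assumption of a typical developer workflow, not how the commercial products above work internally; it presumes the transformers, torch and sentencepiece packages are installed.

```python
# A minimal sketch: translating English to German with an open
# MarianMT model from the Hugging Face hub. Commercial systems like
# Google Translate sit behind APIs and use far larger models.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # English -> German
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def translate(texts):
    # Tokenize a batch of source sentences, generate translations,
    # then decode the output token IDs back into plain text.
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]

print(translate(["How are you?", "Where is the hotel?"]))
```

Even a small open model handles everyday phrases like these well; the hard cases, as the next section argues, are elsewhere.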
Still, machine translation needs to evolve toward native fluency
Languages are intricate and complex. We have everyday phrases like ‘How are you?’ and ‘Where is the hotel?’ but we also have highly specialized terminology, idioms and nuances to deal with, not to mention different dialects.
Adding to the complexity are sarcasm, irony, poetry and other contextual uses of language that human intelligence is better equipped to understand and decipher. The challenge, therefore, lies in machine software becoming sophisticated enough to handle complex translation requirements in the future: to improve translation productivity and deliver translations reliable enough that little manual clean-up is required.
Perhaps in the near future, as we depend more and more on AI for our translation requirements, we will need to meet it halfway by keeping the language in our content simple enough for machines to translate effectively – meaning minimal use of flowery, formal or complex wording. On the flip side, AI algorithms will need to become far more advanced, and neural networks will need to be trained more intelligently, so that they can recognize and translate the complex patterns that already exist in the languages we communicate with.