What Google’s innovative new translation engine means for AI

by - 13 February 2017

In September 2016, Google updated their Google Translate service with a new engine: the Google Neural Machine Translation system (GNMT). If you hadn’t heard of this, don’t worry – you’re not alone.

The development flew largely under the radar until the beginning of this year, when blogger Gil Fewster wrote an article for Medium called “The mind-blowing AI announcement from Google that you probably missed”.

It has since been shared thousands of times, and has generated quite a bit of hype around Google’s translation engine. For instance, take a look at the following excerpt:

And — here’s the bit that should make your brain explode — it got creative.

Google Translate invented its own language to help it translate more effectively.

What’s more, nobody told it to. It didn’t develop a language […] because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.

Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks.

That is, quite understandably, an awe-inspiring claim. Unfortunately, it’s… not entirely accurate.

The author was swiftly called out on his layman’s take by people familiar enough with machine learning and artificial intelligence (AI) to accurately gauge the impact and implications of Google’s new translation engine.

While Fewster has added an informed disclaimer to his article, it’s worth digging into what GNMT has actually achieved, and what it means for the wider artificial intelligence and machine learning community.

The two kinds of artificial intelligence

In technology circles, there are two primary approaches to AI: symbolic and non-symbolic.

Symbolic AI involves a computer system storing explicitly programmed information and rules that it can draw on and present when needed – much like the rule-following operator in the Chinese room thought experiment.
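To make that concrete, here is a deliberately simple sketch of how a symbolic approach to translation might look – the phrasebook entries are made up purely for illustration, and every answer has to be programmed in by hand:

```python
# A deliberately tiny sketch of the symbolic approach: every answer is
# written in by a programmer ahead of time. (The phrasebook entries are
# made up purely for illustration.)
PHRASEBOOK = {
    ("en", "ja", "Hello"): "こんにちは",
    ("en", "ko", "Hello"): "안녕하세요",
}

def translate(text, src, tgt):
    # Look the answer up; if nobody programmed a rule for it, the system fails.
    return PHRASEBOOK.get((src, tgt, text), "No rule for that.")

print(translate("Hello", "en", "ja"))    # こんにちは
print(translate("Goodbye", "en", "ja"))  # No rule for that.
```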

Non-symbolic AI, on the other hand, attempts to mirror our own thought processes by having the computer “think” for itself. Instead of trying to match an answer to a question based on programmed rules, non-symbolic AI gives the best answer it can based on everything it has learned up until that point.

It does this using an Artificial Neural Network (ANN) that, over time, remembers how concepts relate to each other, much in the same way a child learns about the world.
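Here is an equally simple sketch of the non-symbolic approach – a toy neural network, nothing to do with Google’s actual system – that is never told the rule behind the XOR pattern, yet works it out for itself from a handful of examples:

```python
# A toy sketch of the non-symbolic approach (not Google's code):
# a tiny neural network that is never told the XOR rule, but learns it by
# repeatedly adjusting its connection weights to reduce its own errors.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and the answers we want the network to produce.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a small 2-4-1 network.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: what does the network currently answer?
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: nudge the weights in the direction that shrinks the error.
    error = y - output
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ grad_out * 0.5
    W1 += X.T @ grad_hidden * 0.5

# After training, the outputs should sit close to the target answers [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```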

How does Google Neural Machine Translation tie in?

The blog post from Google that Gil Fewster based his article on simply detailed the success of the non-symbolic AI used in their new translation engine – you guessed it: GNMT.

GNMT had successfully translated between Korean and Japanese, even though it had never been taught Korean ⇄ Japanese translations. It achieved this by identifying commonalities between Korean ⇄ English and English ⇄ Japanese translations, and then extrapolating the appropriate Korean ⇄ Japanese translations using its own “interlingua” representation of those commonalities.
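To get a feel for how that is possible, here is a toy sketch of the “interlingua” idea – the vectors are hand-made for illustration, whereas the real system learns its own internal representations and is vastly more complex. The point is simply that sentences meaning the same thing end up close together in one shared space, no matter which language they are written in:

```python
# A toy illustration of the "interlingua" idea (not GNMT's architecture).
import numpy as np

# Hand-made "meaning vectors" purely for illustration; in the real system these
# representations are learned by the network, not written by hand.
shared_space = {
    ("ko", "안녕하세요"):  np.array([0.90, 0.10, 0.00]),   # "hello" in Korean
    ("en", "Hello"):       np.array([0.88, 0.12, 0.02]),
    ("ja", "こんにちは"):   np.array([0.91, 0.09, 0.01]),   # "hello" in Japanese
    ("en", "Goodbye"):     np.array([0.05, 0.90, 0.10]),
    ("ja", "さようなら"):   np.array([0.04, 0.92, 0.08]),   # "goodbye" in Japanese
}

def translate(sentence, src, tgt):
    """Stand-in for the decoder: pick the target-language sentence whose
    vector sits closest to the source sentence's vector."""
    query = shared_space[(src, sentence)]
    candidates = {s: v for (lang, s), v in shared_space.items() if lang == tgt}
    return min(candidates, key=lambda s: np.linalg.norm(candidates[s] - query))

# Korean -> Japanese works even though no Korean/Japanese pairing is stored
# anywhere: both sentences simply live near each other in the shared space.
print(translate("안녕하세요", src="ko", tgt="ja"))   # こんにちは
```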

This is, without a doubt, an amazing achievement by the Google Translate team. In their words, “To the best of our knowledge, this is the first time this type of transfer learning has worked in Machine Translation.”

While it is a great sign of progress for AI and machine learning technology on the whole, it doesn’t quite live up to the game-changing, Blade Runner-esque hype of Fewster’s original article. As he wrote in his ensuing disclaimer, “…while impressive, what Google have achieved is evolutionary, not revolutionary.”

PowerNET can help you increase business productivity with streamlined processes and state-of-the-art IT. If you’d like to see how your company’s internal IT processes measure up, simply take our free IT health check.