Google Translate has been using machine learning to deliver more accurate translations for about two years now. Lately, however, the algorithm has been turning phrases, and even random mishmashes of words that mean nothing to a human, into sensible and, to a certain degree, frightening messages.
The defect caught Motherboard's eye; the outlet reports that Google Translate is converting senseless words into well-structured sentences. Examples include multiple repetitions of words like "dog" and "ag", which Google Translate recognizes as coming from certain foreign languages.
While some users have credited this to unearthly and demonic powers, a subreddit named "TranslateGate" suspects the output could come from text Google learned by peeking into private messages and emails.
On the other hand, a Google spokesperson has denied this possibility, stating:
"Google Translate learns from examples of translations on the web and does not use 'private messages' to carry out translations, nor would the system even have access to that content. This is simply a function of inputting nonsense into the system, to which nonsense is generated."
It is entirely possible that these random, vaguely striking outputs were fed in by miscreants or jilted employees at Google. Alternatively, they could come from troublemaking users misusing the "Suggest an edit" button. But such an intrusion would hardly escape Google's radar.
But experts suggest it is more likely the neural network trying to make sense of discrete information, part of commercial neural networks' tendency to find order in chaos. Google likely used religious texts, the Bible in particular, to teach languages like Maori or Somali to its neural networks.
the google translate demon is an intellectual pic.twitter.com/1eg0vSRCj5
— jase || 6 days (@postupcabello) May 27, 2018
Andrew Rush, an assistant professor at Harvard, says:
"The vast majority of these [black boxes] will look like human language, and when you give it a new one it is trained to produce something, at all costs, that also looks like human language."
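Rush's point can be illustrated with a toy model. The sketch below is not Google's system (which is a neural sequence-to-sequence translator); it is a minimal word-level Markov chain, trained on a short made-up corpus, that demonstrates the same failure mode: a generator built only to emit fluent-looking text will produce corpus-flavored output even when the input is pure nonsense.

```python
import random
from collections import defaultdict

# Toy illustration only: a word-level Markov chain trained on a tiny,
# made-up corpus. Like any model trained purely to produce plausible
# text, it answers gibberish with fluent-sounding output anyway.
corpus = ("in the beginning the word was with the people and the people "
          "walked in the land and the land was good").split()

# Map each word to the list of words that follow it in the corpus.
model = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    model[a].append(b)

def generate(seed, length=8):
    # An unknown seed (e.g. "dogdogdog") is silently replaced by a
    # known word: the model has no way to say "this is nonsense".
    word = seed if seed in model else random.choice(list(model))
    out = [word]
    for _ in range(length - 1):
        # Fall back to a random known word if we hit a dead end.
        word = random.choice(model.get(word, list(model)))
        out.append(word)
    return " ".join(out)

# Nonsense in, corpus-flavored "language" out:
print(generate("dogdogdog"))
```

Every word it emits comes from its training text, so repeated nonsense like "dogdogdog" still yields something that reads like the corpus; scaled up to a neural model trained partly on religious texts, the same dynamic plausibly explains the eerie Bible-flavored translations.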