You might have watched the movie I, Robot, starring Will Smith, in which machines build other machines to fight against humans. It seems we might have a real reason to worry now. AI is everywhere. Of course, it is still immature. But who said machines would rise suddenly? It will happen over time, and it seems we will see the first steps soon. Recently, we learned that Google is using AI to design better AI chips. In other words, AI is improving itself.

Google AI

Google has published a paper in Nature describing a technique that teaches AI to design new AI chips, which will be faster and more energy-efficient than their predecessors.

It takes only six hours to create the design of a next-gen AI chip. Six hours, folks! Human engineers need a few months for the same job. At first sight, it seems machines will do the work much faster and help engineers reach their goals sooner. But who said that AI wouldn't use the new AI chips for its own purposes?

Is AI Better at Designing AI Chips?

Google said it had used the new software to design its latest Tensor Processing Unit (TPU) chips. In practice, the technology decides where the various components of a System-on-Chip (SoC) should reside in order to maximize processing speed and minimize power consumption. Components such as the CPU, GPU, and memory are placed on the silicon die and then wired together.

Initially, Google showed the algorithm 10,000 chip floorplans so it could learn what works best in chip design. As a result, the algorithm produced designs that humans might never think of. For example, where human engineers would lay out components in neat rows, Google's AI scattered them across the die and ended up with a better SoC. A toy sketch of the underlying placement problem follows below.
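To make the floorplanning task concrete, here is a minimal toy sketch in Python. It is illustrative only, not Google's method: Google trains a deep reinforcement-learning agent on prior floorplans, while the sketch below uses the classical baseline, simulated annealing, to place a handful of hypothetical SoC blocks on a small grid while minimizing a wirelength proxy (shorter wires loosely mean higher speed and lower power). All block names, nets, and numbers are made-up assumptions.

import math
import random

# Toy floorplanning sketch (illustrative only, not Google's method).
# Components of a tiny "SoC" must be placed on a grid; a placement is
# scored by a wirelength proxy: total Manhattan distance between
# connected blocks. Shorter wires loosely correlate with higher speed
# and lower power consumption.

GRID = 8  # 8x8 placement grid (hypothetical size)
BLOCKS = ["cpu", "gpu", "mem", "dsp", "io"]
NETS = [("cpu", "mem"), ("gpu", "mem"), ("cpu", "gpu"),
        ("dsp", "io"), ("cpu", "io")]

def wirelength(place):
    """Sum of Manhattan distances over all connected block pairs."""
    return sum(abs(place[a][0] - place[b][0]) + abs(place[a][1] - place[b][1])
               for a, b in NETS)

def random_placement():
    """Assign each block a distinct random grid cell."""
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                          len(BLOCKS))
    return dict(zip(BLOCKS, cells))

def anneal(steps=20000, t0=5.0):
    """Classical simulated-annealing baseline for placement."""
    place = random_placement()
    cost = wirelength(place)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3  # linear cooling schedule
        new = dict(place)
        b = random.choice(BLOCKS)
        new[b] = (random.randrange(GRID), random.randrange(GRID))
        if len(set(new.values())) < len(BLOCKS):
            continue  # skip placements where two blocks overlap
        delta = wirelength(new) - cost
        # Always accept improvements; accept regressions with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            place, cost = new, cost + delta
    return place, cost

if __name__ == "__main__":
    random.seed(0)
    placement, cost = anneal()
    print("final wirelength:", cost)
    for block, cell in sorted(placement.items()):
        print(f"{block:>4} -> {cell}")

On real chips the problem involves thousands of macro blocks and millions of standard cells, which is why a learned approach that generalizes across designs matters so much.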

However, this is not the first time Google has used AI to make things better. A few years ago, it used different AI algorithms to make its computers beat the best Go players in the world.

Well, at the beginning, we were talking about machines that work independently. But in fact, Google's AI does not work on its own. These machines are taught to execute a particular task, and over time they simply get better and better at it.

In this regard, Facebook's chief AI scientist Yann LeCun praised the breakthrough on Twitter:

“Very nice work from Google on deep RL-based optimization for chip layout. Simulated annealing and its heirs are finally dethroned after 40 years. This uses graph NN and deConvNets, among other things. I did not imagined back in the 90s that (de)ConvNets could be used for this.”

Source
