Google and the University of California, Berkeley have developed PRIME, a deep-learning method for designing fast, compact processors for artificial-intelligence workloads.
The approach generates AI chip architectures from a dataset of existing blueprints and their measured performance metrics, rather than designing each new chip from scratch.
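At a high level, this "learn from existing blueprints and metrics" idea can be sketched as offline surrogate-based optimization: fit a model of performance on logged designs, then search the design space against the model instead of a simulator. The sketch below is an illustrative assumption, not Google's actual method; the parameter names, toy dataset, and the simple inverse-distance surrogate are stand-ins for the learned model described in the research.

```python
# Hypothetical sketch of data-driven chip design: predict latency from
# logged (blueprint, measurement) pairs, then pick the candidate design
# with the best predicted latency -- no simulator in the loop.
import itertools

# Toy "logged" dataset of past designs: (compute_units, buffer_kb) -> latency (ms).
# These numbers are made up for illustration.
logged = {
    (2, 128): 9.0,
    (4, 128): 6.5,
    (4, 256): 5.0,
    (8, 256): 4.2,
}

def surrogate(design):
    """Predict latency by inverse-distance weighting over logged designs."""
    num, den = 0.0, 0.0
    for known, latency in logged.items():
        dist = sum((a - b) ** 2 for a, b in zip(design, known)) ** 0.5
        if dist == 0:
            return latency  # exact match: return the measured value
        weight = 1.0 / dist
        num += weight * latency
        den += weight
    return num / den

# Candidate design space searched entirely offline against the surrogate.
space = list(itertools.product([2, 4, 8, 16], [128, 256, 512]))
best = min(space, key=surrogate)
print(best, round(surrogate(best), 2))  # -> (8, 256) 4.2
```

A real system would replace the toy surrogate with a learned neural model and a much larger design space, but the offline structure (dataset in, best predicted design out) is the same.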
The team reports that chips produced with the PRIME design method have up to 50% lower latency than those built with classical approaches, and that the deep-learning method cuts blueprint-generation time by up to 99%.
The researchers compared the performance of PRIME-built chips against EdgeTPU accelerators on nine AI applications, including the MobileNetV2 and MobileNetEdge image-classification models, and noted that the designs were optimized for each application.
The PRIME approach reduced latency by a factor of 2.7 and die area by a factor of 1.5, which the researchers said should lower chip cost and power consumption.
In addition, the PRIME-designed chips delivered higher overall performance in all nine applications in the experiment, and in only three of them was latency higher than in designs produced via simulation.
The researchers see promising directions for PRIME, including designing chips for applications that require solving complex optimization problems, and using blueprints of low-performance chips as training data.
Recall that in June 2021, Google described using reinforcement learning to shorten chip design from several months to six hours.
In October, the company introduced the Pixel 6 and Pixel 6 Pro smartphones, powered by its custom-designed Tensor chip for machine learning.
In August, Samsung began using artificial intelligence to automate the process of developing computer chips.