IBM develops the world’s first energy-efficient AI accelerator chip based on 7nm technology
A team of researchers at IBM claims to have created the world’s first artificial intelligence (AI) accelerator chip based on 7nm technology.
The new chip, which features four cores, was unveiled at the 2021 International Solid-State Circuits Conference held virtually earlier this month.
The details of the chip were disclosed this week in a blog post by Kailash Gopalakrishnan and Ankur Agrawal, both staff members at IBM research.
According to the researchers, their novel AI accelerator chip supports a number of model types while achieving superior power efficiency on all of them.
AI accelerators are a special type of hardware designed to boost the performance of AI applications, specifically deep learning, machine learning and neural networks.
These accelerators focus on in-memory computing or low-precision arithmetic, both of which help speed up execution of large AI algorithms, thus leading to better results in computer vision, natural language processing and other domains.
According to IBM, its new AI chip is optimised for low-precision workloads for a range of machine learning and AI models.
The researchers have found that the use of low-precision techniques in accelerator chips can help boost deep-learning training and inference, while also requiring less power and silicon area.
That, in effect, means that the amount of energy and time needed to train an AI model can be reduced significantly.
The IBM researchers say the technology can be easily scaled and used for a range of commercial applications, such as large-scale model training in the cloud and security workloads.
“Such energy efficient AI hardware accelerators could significantly increase compute horsepower, including in hybrid cloud environments, without requiring huge amounts of energy,” said Agrawal and Gopalakrishnan.
They say their chip is the first to use an ultra-low-precision hybrid 8-bit floating point (HFP8) format for training deep-learning models on a silicon technology node (a 7nm EUV-based chip), and that it has demonstrated better power and performance results than other dedicated training and inference chips.
The HFP8 format was invented two years ago at IBM to overcome the limitations of the standard 8-bit floating point (FP8) format, which works well when training certain standard neural networks but yields poor accuracy when training others.
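To illustrate the trade-off HFP8 addresses, the sketch below quantises values into two simple 8-bit float layouts: one with 4 exponent and 3 mantissa bits (more precision, less range) and one with 5 exponent and 2 mantissa bits (less precision, more range). IBM's published HFP8 work pairs formats along these lines for the forward and backward passes, but the `quantize_fp` helper here is a hypothetical, simplified model (normal numbers only, no subnormals or special values), not IBM's actual encoding.

```python
import math

def quantize_fp(x, exp_bits, man_bits):
    """Round x to the nearest value representable in a toy
    sign/exponent/mantissa float format.

    Illustrative simplification: handles normal numbers only,
    with no subnormals, infinities or NaNs.
    """
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    bias = 2 ** (exp_bits - 1) - 1
    # Exponent of x, clamped to the format's representable range.
    e = math.floor(math.log2(abs(x)))
    e_min = 1 - bias
    e_max = (2 ** exp_bits - 2) - bias
    e = max(e_min, min(e, e_max))
    # Round the mantissa to man_bits fractional bits.
    m = abs(x) / (2 ** e)
    scale = 2 ** man_bits
    m = round(m * scale) / scale
    return sign * m * (2 ** e)

# More mantissa bits represent 0.1 more precisely...
print(quantize_fp(0.1, 4, 3))   # -> 0.1015625
print(quantize_fp(0.1, 5, 2))   # -> 0.09375
# ...but a wider exponent keeps tiny values (e.g. gradients)
# from flushing to zero.
print(quantize_fp(1e-5, 4, 3))  # -> 0.0 (underflows)
print(quantize_fp(1e-5, 5, 2))  # -> ~1.526e-05
```

The point of a hybrid format is exactly this asymmetry: activations and weights benefit from the extra mantissa precision, while gradients in the backward pass need the extra exponent range.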
“We hope that through this work, we can establish an entirely new way of creating and deploying AI models that scale performance and cut power consumption,” the researchers said.
Competition for workload-specific chipsets is heating up. In 2019, Intel unveiled its first artificial intelligence (AI) chip, named ‘Spring Hill’ or Nervana NNP-I 1000, claiming that it offers best-in-class performance-per-watt efficiency (4.8 TOPS/W) for major data centre workloads, along with 5x power scaling for a performance boost. The Nervana NNP-I 1000 is based on a 10nm Ice Lake processor, Intel said, and is intended for large computing centres.
IBM’s unveiling of its AI chip comes less than a month after a joint team of researchers from Microsoft and the University of Sydney claimed a novel breakthrough in the field of quantum computing with Gooseberry chip and cryo-computing core.