Why Did Intel Develop The Loihi 2 Neuromorphic Compute Tile (via Qpute.com)

A Little Light Neuromorphic Computing For Friday

Intel has revealed details about its Loihi 2 Neuromorphic Compute Tile, a completely different take on computing.  Loihi 2 does not process data like the familiar silicon powering the device you are reading this on, nor does it handle data like the deep learning machines that most HPC is focused on, nor does it resemble the quantum processors which are starting to evolve into something more than just proof-of-concept devices.

The Loihi 2 is instead a new type of neural net on a chip, which seeks to emulate the neurons, axons and dendrites found in the noggins of most of the creatures you encounter over the course of your day.  With quantum computers, by contrast, a set of qubits is configured for a specific problem and uses the rules of quantum physics to near-instantly produce an output, which is assigned a probability of being correct.
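For intuition about what "emulating neurons" means in software terms, here is a minimal leaky integrate-and-fire neuron, a common textbook spiking-neuron model.  This is an illustrative sketch only; it is not the specific neuron model Loihi 2 implements, and all parameter values are arbitrary.

```python
# Illustrative leaky integrate-and-fire (LIF) neuron -- a textbook
# spiking-neuron model, NOT the actual model implemented by Loihi 2.

def lif_run(input_current, threshold=1.0, leak=0.9):
    """Integrate input over discrete time steps; emit a spike (1) when
    the membrane potential crosses the threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                     # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Sustained weak input eventually accumulates to a spike; strong input
# spikes sooner.
print(lif_run([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))
```

Unlike the artificial neurons in deep learning models, a spiking neuron like this carries state over time, which is the property neuromorphic hardware exploits.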

Quantum computers are incredibly powerful at solving the problem they were designed for, and can produce a near-instantaneous output which, on a traditional computer, could take thousands of hours of processing to arrive at.  The problem is that there is no flexibility in the system; the qubits need to be specifically configured for each and every problem fed into the system.  There is also the problem of cooling, as even an LN2 setup is not enough to keep that type of system in a stable state.

Traditional deep learning models are also quite different from neuromorphic computing.  The HPC devices we’ve seen from NVIDIA and others consist of a large network of artificial neurons which react differently to inputs depending on how and where they were set up.  In order to produce more detailed results the network needs to be made larger, or as the name implies, made deeper, which results in increased costs and heat production.
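A rough way to see why making a network deeper increases cost is to count its parameters.  The sketch below tallies the weights and biases in a stack of fully-connected layers; the layer sizes are arbitrary examples, not taken from any real model.

```python
# Rough illustration of why deeper/wider networks cost more: count the
# weights and biases in a stack of fully-connected layers.  Layer sizes
# below are arbitrary examples, not from any real model.

def param_count(layer_sizes):
    """Total weights + biases for consecutive fully-connected layers."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix + bias vector
    return total

shallow = param_count([784, 128, 10])            # one hidden layer
deeper = param_count([784, 512, 512, 512, 10])   # three hidden layers
print(shallow, deeper)  # the deeper net has roughly 9x the parameters
```

Every parameter added this way must be stored, moved, and multiplied on each inference pass, which is where the extra cost and heat come from.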

The other challenge in producing decent results from a neural network is that it needs to be trained on the task assigned to it.  For instance, convincing it to properly process and identify objects in an image or video requires a long training session which, when completed, will allow it to effectively recognize the objects it was trained on.  What it won’t be able to do is identify things it has not been trained on, or even the things it was trained to recognize if they are partially obscured or presented at an angle it was not trained on.

That lack of flexibility inherent to standard neural networks is exactly what Intel is attempting to avoid with its Loihi 2 Neuromorphic Compute Tile.  This new architecture is designed with flexibility as the primary goal and the accuracy of the results as a secondary goal.  That is not to say it will spit out answers which are completely wrong; instead, Intel is aiming for results which are just good enough.

The example Ars Technica quoted is about how the performance of an autonomous robot will change over time, as parts wear and joint friction increases.  Currently, those changes are either ignored, the robot is repaired once excessive performance drift leaves it unable to function, or the deep learning algorithm controlling the robot has to be regenerated and reinstalled.  With the Loihi 2 in control, the robot could theoretically detect the changes in its physical capabilities and generate new control algorithms which take those changes into account, without requiring a programmer to intervene.

If you are more interested in the hardware and software as opposed to the theory, a visit to ServeTheHome will be profitable.

This is a syndicated post. Read the original post at Source link.