A Research and Markets report indicates that the neuromorphic computing market is poised to reach USD 1.78 billion by 2025.
Does this mean neuromorphic computing is set to rule the future?
According to Moore’s law, the number of transistors on a microchip doubles roughly every two years. That trend, however, is losing its validity, and this is where intelligent AI technologies make their entry — neuromorphic computing among them.
The ‘neuro’ in the name captures the idea: developing computer chips that can behave like the human brain. And there have already been some spellbinding advances in neuromorphic computing.
For instance, on May 22, 2017, a team of scientists at the University of Michigan unveiled a “memristor” — a prototype computer circuit that can mimic the way neurons in a mammalian brain respond to stimuli.
Put simply, traditional computing is becoming a less reliable path forward. Without continued innovation, it is difficult to push past the current technology threshold. It is therefore important to bring about the necessary design transformations — with improved performance — to change the way computers function.
Neuromorphic computing combines electrical engineering, computer science, mathematics, and biology to develop technology that can sense and process information much as the human brain does.
Key research in neuromorphic computing
The first generation of intelligent AI technology could draw reasoning and conclusions within a narrowly defined domain. The second generation extended this toward human perception, with capabilities like autonomous adaptation and recognition. The next generation of AI, however, must be able to address novel situations and abstractions well enough to automate ordinary human activities.
Intel Labs is a driving force behind this third generation of AI, with neuromorphic computing as a key focus area. Its research spans the operation of the human brain, the emulation of neural structure, and probabilistic computing. This work helps create algorithmic approaches for dealing with critical real-world conditions such as ambiguity, uncertainty, and contradiction.
The key challenges in neuromorphic computing are matching human flexibility and the ability to learn from unstructured stimuli while retaining the energy efficiency of the human brain. The computational building blocks in neuromorphic systems are analogous to neurons, and spiking neural networks (SNNs) offer a novel model for arranging these elements to emulate the natural neural networks present in biological brains.
Each neuron in a spiking neural network can fire independently of the others. When it fires, it sends signals to other neurons in the network, changing their electrical states. By encoding information both within the signals and in their timing, an SNN simulates natural learning processes, dynamically remapping the synapses between artificial neurons in response to stimuli.
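The fire-and-signal behavior described above can be sketched with a leaky integrate-and-fire neuron, the simplest spiking-neuron model. This is a minimal illustration, not the model used by any particular chip, and all constants here are illustrative:

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron.

    The membrane potential leaks toward rest each step, integrates the
    incoming current, and emits a spike (1) when it crosses the threshold,
    after which it resets to zero.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leak, then integrate input
        if potential >= threshold:              # threshold crossed: fire
            spikes.append(1)
            potential = 0.0                     # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant drive of 0.3 per step makes the neuron fire periodically:
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

In a full SNN, each emitted spike would become weighted input current to downstream neurons, and learning rules would adjust those weights based on spike timing.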
- IBM’s TrueNorth Neuromorphic Chip
Under DARPA’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program, scientists at IBM developed one of the largest and most complex computer chips the world has ever produced: a chip inspired by the neuronal structure of the brain that requires only a fraction of the electrical power of a conventional chip.
“Inspired by the brain’s structure, we have developed an efficient, scalable, and flexible non–von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real-time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.”
~ Merolla et al.
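The power and frame-rate figures in the quote imply a striking energy budget per video frame — a quick back-of-the-envelope check:

```python
# Figures from the TrueNorth quote above
power_w = 0.063          # 63 milliwatts
frames_per_second = 30   # 400x240 video input rate

# Energy consumed per frame = power / frame rate
energy_per_frame_j = power_w / frames_per_second
print(f"{energy_per_frame_j * 1000:.1f} mJ per frame")  # → 2.1 mJ per frame
```

Roughly two millijoules to detect and classify objects in a frame of video — orders of magnitude below what a conventional processor would spend on the same workload.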
- Intel’s Loihi Chip

Intel produced Loihi, its fifth-generation self-learning neuromorphic test chip, in 2017. It is a 128-core design optimized specifically for SNN algorithms and fabricated on 14nm process technology. Because Loihi runs spiking neural networks natively and learns on-chip, it does not rely on conventional convolutional-neural-network training methods to become smarter over time.
The Loihi chip contains around 131,000 computational neurons, each able to communicate with other neurons on the chip.
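The figure of roughly 131,000 neurons lines up with the 128-core design if each core implements 1,024 neurons — a commonly cited figure for Loihi, though the per-core count is an assumption not stated in the text:

```python
cores = 128
neurons_per_core = 1024  # assumed per-core count; not stated in the article

total_neurons = cores * neurons_per_core
print(total_neurons)  # → 131072
```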
In the future, AI will play a revolutionary role in shaping the potential of neuromorphic computing — transforming its scalability, size, efficiency, design, architecture, and scope. With rapid advances underway in neuromorphic computing, the future of AI looks bright.