Society, and in turn businesses, need to find more sustainable alternatives to the energy-hungry computing that has gotten us to where we are today. I think it’s time to look for inspiration in the most efficient and powerful computer of all: the human brain. Interestingly, the emergence of neuromorphic computing, which mimics our neural systems, promises extraordinary performance and transformative energy efficiency.
Before I address its broader benefits and potential applications, let me put that energy savings into perspective. Conventional computer technology is based on the so-called von Neumann architecture, in which data processing and transfer are carried out intensively and continuously. Next-generation computers are expected to run at exascale, performing 10^18 calculations per second. But the disadvantage is power consumption.
Computing and data transfer are responsible for a large portion of this consumption, and the rapid development of machine learning and AI neural network models is adding even more demand. Some AI learning algorithms running on an exascale computer could draw up to 10 megawatts of power. Data-centric computing requires a hardware system revolution: computing system performance, particularly energy efficiency, sets the fundamental limit of AI/ML capability. As for neuromorphic computing? It has the potential to deliver HPC-class performance while consuming 1/1,000th of the energy.
The neuromorphic approach uses artificial silicon neurons to form a spiking neural network (SNN) that performs event-triggered computations. This is the key difference between an SNN and other networks, such as the convolutional neural network (CNN): spiking neurons process input information only after receiving an incoming spike signal. In effect, SNNs try to make artificial neurons behave more like real ones.
The process does not work in discrete time steps. Instead, it takes events over a time series to generate signals within neurons. These signals accumulate within each neuron until a threshold is exceeded, at which point the neuron fires and computation is triggered.
Ultra-low power operation can be achieved because SNNs are effectively in “off” mode most of the time and only come into action when a change or “event” is detected.
Once active, an SNN can achieve fast computation without running a power-hungry high-speed clock, because a single event triggers a large number of parallel operations (the equivalent of thousands of CPUs working in parallel). It therefore consumes only a fraction of the power of a CPU or GPU for the same workload.
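The accumulate-until-threshold behaviour described above can be sketched with a leaky integrate-and-fire (LIF) neuron, the textbook model behind most SNN hardware. This is a minimal illustration, not any vendor's implementation; the threshold, leak factor, and input values are arbitrary choices for the sketch.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic
# unit of a spiking neural network. All constants are illustrative.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # firing threshold
        self.leak = leak            # per-step membrane decay factor
        self.potential = 0.0        # accumulated membrane potential

    def step(self, input_spike=0.0):
        """Process one time step; return True if the neuron fires."""
        self.potential = self.potential * self.leak + input_spike
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

neuron = LIFNeuron()
# Sparse, event-driven input: most steps carry no spike, so the neuron
# stays quiet until accumulated input finally crosses the threshold.
events = [0.0, 0.6, 0.0, 0.6, 0.0]
fired = [neuron.step(e) for e in events]
print(fired)  # → [False, False, False, True, False]
```

The key point is that the neuron does nothing useful on the zero-input steps; in silicon, those steps correspond to the "off" mode that makes SNNs so frugal with power.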
That’s why the future of neuromorphic computing is well suited to edge AI: deploying low-power AI on end devices without connecting to the cloud. This is especially true for TinyML applications that tend to focus on battery-powered sensors, IoT devices, etc.
Next-generation neuromorphic systems are expected to have intrinsic capabilities to learn and handle complex data just as our brains do. Such systems have the potential to process large amounts of digital information with much lower power consumption than conventional processors.
In the medium term, hybrid computers that pair conventional processors with neuromorphic chips could greatly improve performance compared to conventional machines. In the long term, fully neuromorphic computers will be fundamentally different and designed for specific applications, from natural language processing to autonomous driving.
As far as design goes, instead of the conventional architecture of splitting chips into processor and memory, the computer can be built with silicon “neurons” that perform both functions.
Creating extensive “many-to-many” neural connectivity will enable an efficient pipeline for signal interaction and facilitate massively parallel operation. The trend is toward integrating ever greater numbers of electronic neurons, synapses, and related elements on a single chip.
Neuromorphic processor chip design approaches broadly follow one of several different paths. The ASIC-based digital neuromorphic chip delivers highly optimized computing performance tailored to application requirements. For AI applications, it can potentially perform both inference and real-time learning.
The FPGA-based chip is similar to the ASIC-based digital design but also offers portability and reconfigurability. Due to its highly reconfigurable nature and parallel speed, FPGA is considered a suitable platform to mimic, to some extent, the natural plasticity of biological neural networks.
Analog neuromorphic chips, which include so-called “in-memory computing”, have the potential to achieve the lowest power consumption. They would be primarily suitable for machine learning inference rather than real-time learning.
The photonic integrated circuit (PIC)-based neuromorphic chip offers photonic computing that can achieve very high speed with very low power consumption, while the mixed-signal NSoC (neuromorphic system on chip) design combines an extremely low-power analog design for ML inference with a digital SNN-architecture processor for real-time learning.
I expect neuromorphic computing to generate development opportunities in various technological areas, such as materials, devices, neuromorphic circuits, and new neuromorphic algorithms and software development platforms, all crucial elements for the success of neuromorphic computing.
There are countless potential applications. The application of neuromorphic techniques to vision applications represents a large market opportunity for many different sectors, including smart vision sensors and gesture control applications in smart homes, offices and factories.
Another use case is neuromorphic computing for the control of myoelectric prostheses. Myoelectric prostheses help people with reduced mobility by detecting and processing muscle spikes. However, improving the user experience means addressing current inefficiencies, such as increasing the granularity of motion classification and reducing the computational resources, and thereby the energy, required.
Low-power edge computing represents a key area of high commercial potential. As IoT applications proliferate in homes, offices, industries, and smart cities, there is a growing need for more intelligence at the edge as control moves from data centers to on-premises devices. Applications such as autonomous robots, wearable health systems, security, and IoT share the common characteristics of stand-alone, ultra-low-power, and battery-powered operation.
One potential application that I find particularly fascinating is that of “Parametric Insurance.” As global attention increasingly focuses on climate-related issues, this unconventional form of “disaster insurance” is playing an increasingly important role. It is a product that offers pre-specified payments based on a triggering event and can help provide protection when standard policies are harder to obtain.
To me, the connection with neuromorphic computing is clear. Parametric insurance can be linked to a catastrophe bond (CAT) for events such as hurricanes, earthquakes, etc. Edge computing with neuromorphic technology has an important role to play, as it would enable very granular and sophisticated risk analysis, adjudication, and payment settlement. Everything would happen at the edge, with correspondingly low cost.
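The trigger-based payout logic described above is simple enough to sketch in a few lines. This is a hypothetical illustration of how a parametric policy might evaluate a sensor reading on an edge device; the trigger threshold, payout amount, and function name are invented for the example, not drawn from any real policy.

```python
# Hypothetical sketch of a parametric-insurance trigger running on an
# edge device: a pre-agreed payout is released automatically when a
# measured parameter crosses a contractual threshold. All names and
# figures here are illustrative.

def parametric_payout(measured_wind_speed_kmh, trigger_kmh=200, payout=50_000):
    """Return the pre-specified payout if the trigger condition is met."""
    if measured_wind_speed_kmh >= trigger_kmh:
        # Event triggered: fixed payment, no loss adjustment needed.
        return payout
    return 0

print(parametric_payout(215))  # → 50000  (gust exceeds the 200 km/h trigger)
print(parametric_payout(150))  # → 0      (below trigger, no payout)
```

Because the decision reduces to evaluating a threshold on local sensor data, it maps naturally onto the event-driven, ultra-low-power operation that neuromorphic edge hardware is designed for.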
About the Author
Dr Aidong Xu, Head of Semiconductor Capability, Cambridge Consultants
Aidong has more than 30 years of experience in various industries, including some of the leading semiconductor companies. He has led large, internationally based engineering teams and introduced innovative, industry-leading products to the global market that have achieved rapid and sustained business growth. Aidong has a Ph.D. in Power Electronics and Power Semiconductors.