Today, the neuromorphic approach still occupies the ‘curio cabinet’. “Many are prophesying the advent of neuromorphic approaches in the same way deep learning techniques were wrongfully dismissed – until they ended up reigning,” explained Pierre Cambou, Principal Analyst, Imaging at Yole Développement (Yole).
Cambou added: “Many similarities point to the idea that such a paradigm shift could happen quickly.”
Several years ago, the biggest obstacle preventing the DNN approach from performing at its best was the lack of suitable hardware to support DNNs' innovative software advances. Today, the same is true for neuromorphic technology – but as the first SNN chips roll out, the first beachhead markets are ready to fuel growth. The initial markets are industrial and mobile, mainly for the robotics revolution and real-time perception. Within the next decade, the availability of hybrid in-memory computing chips should unlock the automotive market, which is desperate for a mass-market AD technology.
Neuromorphic sensing and computing could be the magic bullet for these markets, solving most of AI’s current issues while opening new perspectives in the decades to come.
Today, Yole explores the computing and deep-learning world with an imaging focus. The new report, Neuromorphic Sensing & Computing, delivers an in-depth understanding of the neuromorphic landscape, with key technology trends, the competitive landscape, market dynamics, and segmentation per application. It presents key technical insights and analysis regarding future technology trends and challenges.
This analysis sits at the crossroads of two industries covered by Yole's analysts: imaging and software & computing. How will this industry evolve? Who are the companies to watch? What is the status of their development?
Since 2012, deep learning techniques have proven their superiority in the AI space. These techniques have spurred a giant leap in performance, and have been widely adopted by the industry.
“Recently, we have witnessed a race for development of new chips specialized for deep-learning training and inference, either for high-performance computing, servers, or edge applications,” said Yohann Tschudi, Technology & Market Analyst, Computing & Software at Yole. “These chips use the existing semiconductor paradigm based on Moore’s Law. And while it is technically possible to manufacture chips capable of performing hundreds of TOPS to serve today’s AI application space, the desired computing power is still well below expectations.”
Consequently, an arms race is ongoing, centered on the use of ‘brute-force computing’ to address computing power requirements. The technology node currently used is already at 7nm, and full-wafer chips have emerged. Room for improvement appears small, and relying solely on the Moore’s Law paradigm is creating several uncertainties.
Current deep-learning techniques and associated hardware face three main hurdles. First, the economics of Moore’s Law make it very difficult for a start-up to compete in the AI space, thereby limiting competition. Second, data overflow makes current memory technologies a limiting factor. And third, the exponential increase in computing power requirements has created a ‘heat wall’ for each application.
Meanwhile, the market is demanding more performance for real-time speech recognition and translation, real-time video understanding, and real-time perception for robots and cars; hundreds of other applications are likewise asking for more intelligence that combines sensing and computing.
Given these significant hurdles, the time is ripe for disruption: a new technology paradigm in which start-ups can differentiate themselves, and which could utilize the benefits derived from emerging memory technologies and drastically improve data, bandwidth, and power efficiencies.
Many foresee this new paradigm being the neuromorphic approach; some call it the event-based approach, in which computation happens only when needed rather than at every clock step. This method enables the tremendous energy savings essential to running today’s compute-hungry AI algorithms. It is the probable next step in AI technology.
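To make the event-based idea concrete, here is a minimal, purely illustrative sketch (not taken from the report, and not tied to any particular chip): a leaky integrate-and-fire neuron whose state is updated only when an input spike arrives. The decay for the silent interval is applied lazily, so no work is done on clock ticks without activity — which is where the energy savings come from.

```python
# Illustrative sketch of event-driven computation (assumption: a simple
# leaky integrate-and-fire model; real SNN hardware differs in detail).
import math

class EventDrivenLIF:
    def __init__(self, threshold=1.0, tau=10.0):
        self.threshold = threshold  # firing threshold
        self.tau = tau              # membrane time constant
        self.potential = 0.0        # membrane potential
        self.last_t = 0.0           # time of the last input event

    def on_event(self, t, weight):
        """Process one input spike at time t with synaptic weight."""
        # Lazily decay the potential over the elapsed silent interval,
        # instead of updating it at every clock step.
        self.potential *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # emit an output spike (event)
        return False

# Only three updates occur for three input events, regardless of how
# many clock steps span the simulation window.
neuron = EventDrivenLIF(threshold=1.0, tau=10.0)
spikes = [neuron.on_event(t, w) for t, w in [(1.0, 0.6), (2.0, 0.3), (3.0, 0.5)]]
print(spikes)  # the third event pushes the potential over threshold
```

In a clocked design, the same neuron would be recomputed at every tick whether or not any input changed; here, cost scales with the number of events, not with elapsed time.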