French startup launches neural processor for AI inference
Neurxcore has announced the launch of its Neural Processor Unit (NPU) product line, aimed at AI inference applications.
This new product line is based on an enhanced version of NVIDIA's open-source Deep Learning Accelerator (NVDLA) technology, supplemented with Neurxcore's own patented architectures.
The SNVDLA IP series from Neurxcore emphasises energy efficiency, performance, and capability, focusing primarily on image processing tasks such as classification and object detection. It also offers versatility for generative AI applications. The series has been validated on a 22nm TSMC platform and demonstrated on a board showcasing a variety of applications.
Alongside the hardware, Neurxcore has developed the Heracium SDK (Software Development Kit). Built upon the open-source Apache TVM (Tensor Virtual Machine) framework, this SDK allows users to configure, optimise, and compile neural network applications for the SNVDLA products.
The NPU product line serves a broad spectrum of industries and applications. It is adaptable for use in ultra-low power to high-performance scenarios, including IoT, wearables, smartphones, smart homes, surveillance, Set-Top Box and Digital TV (STB/DTV), smart TVs, robotics, Edge computing, AR/VR, ADAS, servers, and more.
In addition to its NPU product line, Neurxcore offers a complete package for developing customised NPU solutions. This includes new operators, AI-optimised subsystem design, and model development, encompassing training and quantisation.
Virgile Javerliac, Founder and CEO of Neurxcore, stated: “80% of AI computational tasks involve inference. Achieving energy and cost reduction while maintaining performance is crucial.” He also acknowledged the team behind the product and reaffirmed Neurxcore’s commitment to customer service and collaborative opportunities.
Inference, the stage at which a trained model makes predictions or generates content, is the workload Neurxcore's solutions target. They are suited to a wide range of applications, including deployments that serve many users simultaneously.
The SNVDLA product line delivers significant improvements in energy efficiency, performance, and features over the original NVIDIA design, while retaining the benefit of NVIDIA's industrial-grade development. Tunable parameters, such as the number of cores and the number of multiply-accumulate (MAC) units per core, let the same IP scale across diverse markets while remaining energy- and cost-efficient.
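To illustrate why core count and MACs per core are the key tuning knobs, the sketch below estimates peak throughput for two hypothetical configurations. The specific core counts, MAC counts, and clock speeds are illustrative placeholders, not published SNVDLA figures.

```python
# Hypothetical peak-throughput estimate for a configurable NPU.
# All configuration values below are illustrative, not vendor specifications.

def peak_tops(cores: int, macs_per_core: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS: each MAC performs 2 ops (multiply + add) per cycle."""
    ops_per_second = cores * macs_per_core * clock_ghz * 1e9 * 2
    return ops_per_second / 1e12

# Example: a small ultra-low-power configuration vs a larger high-performance one.
low_power = peak_tops(cores=1, macs_per_core=256, clock_ghz=0.5)   # 0.256 TOPS
high_perf = peak_tops(cores=8, macs_per_core=2048, clock_ghz=1.0)  # 32.768 TOPS
```

Scaling both parameters moves the same architecture across two orders of magnitude of throughput, which is how one IP line can span wearables to servers.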
According to Gartner's 2023 AI Semiconductors report, the AI semiconductor market is projected to reach $111.6 billion by 2027, with a five-year CAGR of 20%. This growth is attributed to the increasing use of AI techniques in data centres, Edge computing, and endpoint devices, necessitating optimised semiconductor devices.
Neurxcore's launch of its NPU product line marks a significant advancement in AI inference technology. With its enhanced efficiency, performance, and broad market applicability, the SNVDLA IP series is poised to make a substantial impact in the burgeoning field of AI semiconductors.