
Neural processor IP tackles generative AI workloads



Ceva has enhanced its NeuPro-M NPU IP family to bring the power of generative AI to infrastructure, industrial, automotive, consumer, and mobile markets. The redesigned NeuPro-M architecture and development tools support transformer networks, convolutional neural networks (CNNs), and other neural networks. NeuPro-M also integrates a vector processing unit to accommodate future neural network layers.

The power-efficient NeuPro-M NPU IP delivers peak performance of 350 tera operations per second per watt (TOPS/W) on a 3-nm process node. It is also capable of processing more than 1.5 million tokens per second per watt for transformer-based large language model (LLM) inferencing.
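As a back-of-the-envelope check, the quoted efficiency figures can be inverted into energy per operation and per token. The sketch below is purely illustrative arithmetic derived from the article's two numbers, not a Ceva-published metric:

```python
# Back-of-the-envelope arithmetic from the quoted specs (illustrative only).
PEAK_TOPS_PER_WATT = 350           # tera-operations per second per watt
TOKENS_PER_SEC_PER_WATT = 1.5e6    # LLM tokens per second per watt

# Energy per operation: 1 / (350e12 ops per joule) ~ a few femtojoules.
energy_per_op_j = 1.0 / (PEAK_TOPS_PER_WATT * 1e12)

# Energy per token: 1 / (1.5e6 tokens per joule) ~ a fraction of a microjoule.
energy_per_token_j = 1.0 / TOKENS_PER_SEC_PER_WATT

print(f"{energy_per_op_j * 1e15:.2f} fJ/op")       # ~2.86 fJ per operation
print(f"{energy_per_token_j * 1e6:.2f} uJ/token")  # ~0.67 uJ per token
```

At these peak ratings, each operation costs on the order of 3 fJ and each LLM token well under a microjoule, which is the kind of budget that makes on-device inferencing plausible.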

To enable scalability for diverse AI markets, NeuPro-M adds two new NPU cores: the NPM12 and NPM14 with two and four NeuPro-M engines, respectively. These two cores join the existing NPM11 and NPM18 with one and eight engines, respectively. Processing options range from 32 TOPS for a single-engine NPU core to 256 TOPS for an eight-engine NPU core.
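The quoted 32-to-256 TOPS range implies roughly linear scaling at 32 TOPS per engine. A quick sketch of the lineup under that assumption (the two- and four-engine figures are interpolated, not stated in the article):

```python
# Illustrative scaling table, assuming peak TOPS scales linearly with engine
# count, anchored by the quoted 32-TOPS (one-engine) and 256-TOPS
# (eight-engine) endpoints. Intermediate figures are inferred.
TOPS_PER_ENGINE = 32

cores = {"NPM11": 1, "NPM12": 2, "NPM14": 4, "NPM18": 8}

for name, engines in cores.items():
    print(f"{name}: {engines} engine(s), {engines * TOPS_PER_ENGINE} TOPS peak")
```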

NeuPro-M meets stringent automotive safety and quality standards, including ISO 26262 ASIL-B and Automotive SPICE. Development software for NeuPro-M includes the Ceva Deep Neural Network (CDNN) AI compiler, a system architecture planner tool, and a neural network training optimizer tool.

The NPM11 NPU IP is generally available now, while the NPM12, NPM14, and NPM18 are available to lead customers.

NeuPro-M product page

Ceva


