Accelerate Edge AI Innovation
AI data-processing workloads at the edge are already transforming use cases and user experiences. The third-generation Ethos-U NPU is designed to meet the needs of future edge AI applications.
The Ethos-U85 supports transformer-based models at the edge, the foundation for newer language and vision models. It scales from 128 to 2048 MAC units and is 20% more energy efficient than the Arm Ethos-U55 and Arm Ethos-U65, enabling higher-performance edge AI use cases in a sustainable way. Because it offers the same toolchain as previous Ethos-U generations, partners benefit from seamless migration and can leverage existing investments in Arm-based machine learning (ML) tools.
Features and Benefits
20% more energy efficient than Ethos-U55 and Ethos-U65, enabling future use cases in a sustainable way.
Scales from 128 to 2048 MACs, providing up to 4 TOP/s of performance at 1 GHz.
Native support for transformer networks, plus standard support for the Tensor Operator Set Architecture (TOSA).
Supported by Arm Corstone-320, a reference design with a unified toolchain and the extensive Ethos-U ecosystem.
Specifications
The Arm Ethos-U85 is the highest-performance implementation of the Arm Ethos-U NPU. It enables enhanced edge AI processing with support for transformer-based models and is 20% more energy efficient than previous Ethos-U generations. The key characteristics of Ethos-U85 include:
- Scalable performance – 256 GOP/s up to 4 TOP/s at 1 GHz
- Scales from 128 to 2048 MACs
- Further reduced energy consumption – 20% lower than previous Ethos-U NPUs
- Native support for transformer-based networks
Ethos-U85 targets a broad range of applications, from high-performance Arm Cortex-A systems to low-power embedded devices based on Arm Cortex-M.
Visit Arm Developer for more details
Key Documentation
Compare all Ethos-U processors: download comparison PDF
Where Innovation and Ideas Come to Life
Artificial Intelligence
AI and ML are expanding and defining more applications than ever before, changing how we interact with devices and machines everywhere. Arm processor IP is scalable and flexible, so it can run any type of ML workload, today or in the future.
Industrial IoT
Ethos-U85 can be deployed into MCU and MPU systems to accelerate embedded ML tasks and run new AI workloads on higher performance IoT systems, such as high-speed motor controllers and robotic controllers.
Smart Homes
Increasingly advanced levels of interaction with devices in our homes require increased compute performance from processors. Ethos-U85 delivers new levels of intelligence in the smart home, including voice and vision applications.
Talk with an Expert
As AI devices become more pervasive than ever, the Ethos-U85 can help create a better user experience for your products. Discover how by talking to an Arm expert today.
Related Products
Corstone-320
Speeds time-to-market for edge AI solutions with software and an example subsystem. It integrates the Arm Cortex-M85, Arm DMA-350, Arm Mali-C55, and the Arm Ethos-U85, with support for transformer networks.
Cortex-M85
Cortex-M85 provides increased security and high performance on a single Cortex-M without the need to migrate to multicore or heterogeneous platforms.
Cortex-M55
The Arm Cortex-M55 processor with AI capabilities is the first Cortex-M processor to feature Arm Helium technology, enabling a significant uplift in power-efficient ML and DSP performance for IoT devices.
Software and Tools
Arm’s comprehensive suite of integrated software and tools makes it easier and quicker to design, develop, and maintain AI-based IoT and automotive applications.
A Foundation of Silicon Success
Arm Compute Library
This software library is a collection of optimized low-level functions for Arm Cortex-A CPUs and Arm Mali GPUs that target popular image processing, computer vision, and ML workloads. It offers significant performance uplift over OSS alternatives and is available free of charge under a permissive MIT open source license.
CMSIS-NN
Common Microcontroller Software Interface Standard – Efficient Neural Network Implementation (CMSIS-NN) is a collection of efficient neural network kernels developed to maximize the performance and minimize the memory footprint of neural networks on Cortex-M processor cores.
Arm NN
Arm NN bridges the gap between existing NN frameworks and the underlying IP. It translates models from existing neural network frameworks, such as TensorFlow and Caffe, so they run efficiently – without modification – across Arm Cortex-A CPUs, Arm Mali GPUs, and Ethos-N NPUs.