TOPIC 1

Build AI/ML Apps

Optimize ML inference and training performance on AWS, learn best practices for ML inference with PyTorch 2.0, and more.

 

Best Practices to Optimize ML Performance on AWS Graviton

Optimizing Inference Performance with PyTorch 2.0

  • An example tutorial showing how to get the best inference performance with bfloat16 kernels and the right backend selection.
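
As a rough illustration of those two levers (not the tutorial's exact code), the sketch below compiles a small stand-in torch.nn model with the default TorchInductor backend and runs it under CPU bfloat16 autocast; the model, shapes, and backend choice are assumptions.

    import torch
    import torch.nn as nn

    # Stand-in model and input, for illustration only.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
    x = torch.randn(32, 512)

    # Backend selection: torch.compile with the default TorchInductor backend.
    compiled = torch.compile(model, backend="inductor")

    # bfloat16 kernels: run inference under CPU autocast in bfloat16.
    with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        out = compiled(x)

    print(out.dtype, out.shape)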

Docker Images for TensorFlow and PyTorch on Arm

  • Learn how to build and use TensorFlow and PyTorch Docker images on Arm.

TOPIC 2

Build GenAI Apps

Learn what Arm Neoverse CPUs can do when running LLMs and SLMs, and how to accelerate Hugging Face (HF) models on Arm.

 

LLM Performance on Arm Neoverse

  • Learn about the capabilities of Arm Neoverse V1-based AWS Graviton3 CPUs for running LLMs, and their key advantages over other CPU-based server platforms.

LLM Chatbot on Arm

  • Step into the world of Generative AI with this LLM chatbot learning path. Discover how you can run an LLM chatbot on Arm-based servers using llama.cpp.
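
As a rough illustration (not the learning path's exact steps), this sketch drives llama.cpp through the llama-cpp-python bindings; the GGUF model path, context size, and thread count are placeholder assumptions.

    from llama_cpp import Llama  # pip install llama-cpp-python

    # Placeholder model path and settings; point this at any GGUF model you have.
    llm = Llama(model_path="./model.gguf", n_ctx=2048, n_threads=8)

    history = [{"role": "system", "content": "You are a concise, helpful assistant."}]
    while True:
        user = input("You: ")
        history.append({"role": "user", "content": user})
        reply = llm.create_chat_completion(messages=history, max_tokens=256)
        answer = reply["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": answer})
        print("Bot:", answer)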

Small Language Models (SLMs)

  • An overview of how SLMs can be used more efficiently and sustainably than LLMs, since they require fewer resources and are easier to customize and control.
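
For a concrete feel, the sketch below runs a sub-billion-parameter model through the Hugging Face transformers text-generation pipeline; the particular checkpoint is only an illustrative assumption.

    from transformers import pipeline

    # Any small instruction-tuned checkpoint works here; this one is ~0.5B parameters.
    generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

    prompt = "In one sentence, why do small language models suit CPU-only deployments?"
    out = generator(prompt, max_new_tokens=64, do_sample=False)
    print(out[0]["generated_text"])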

Accelerate HF Models using Arm Neoverse

  • Learn about the key ML features of Arm Neoverse CPUs, illustrated with a sentiment analysis use case.
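
A minimal version of that use case with the transformers pipeline is sketched below; the DistilBERT checkpoint is the pipeline's usual default and only an assumption about what the learning path uses.

    from transformers import pipeline

    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    reviews = [
        "Inference on the Neoverse-based instance was impressively fast.",
        "The setup instructions were confusing and slow to follow.",
    ]
    for review, result in zip(reviews, classifier(reviews)):
        print(f"{result['label']:8s} {result['score']:.3f}  {review}")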

Accelerate and Deploy NLP Models from HF

TOPIC 3

Accelerate GenAI, AI, and ML

Accelerate your AI/ML frameworks, tools, and cloud services with open-source Arm libraries and optimized Arm SIMD code.

 

Accelerating PyTorch 2.0 Inference with AWS

  • A collaboration between AWS, Arm, and Meta to optimize PyTorch 2.0 inference on Arm-based processors, improving performance by up to 3.5x over the previous PyTorch release.
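
The sketch below shows the general shape of such an eager-versus-torch.compile comparison on CPU; the model, batch size, and iteration count are illustrative assumptions, not the benchmark behind the 3.5x figure.

    import time
    import torch
    import torchvision.models as models

    model = models.resnet50(weights=None).eval()   # stand-in workload
    x = torch.randn(8, 3, 224, 224)

    def avg_latency(fn, iters=20):
        with torch.inference_mode():
            fn(x)                                  # warm-up (triggers compilation)
            start = time.perf_counter()
            for _ in range(iters):
                fn(x)
        return (time.perf_counter() - start) / iters

    eager = avg_latency(model)
    compiled = avg_latency(torch.compile(model))
    print(f"eager: {eager * 1e3:.1f} ms/iter, compiled: {compiled * 1e3:.1f} ms/iter")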

Arm Compute Library (ACL)

  • ACL is a fully featured open-source library containing a collection of low-level ML functions optimized for Arm Neoverse and other Arm architectures.

Arm Kleidi

  • The Arm Kleidi open-source libraries are lighter-weight performance libraries (compared to ACL) for accelerating AI and ML workloads and frameworks.

Arm KleidiCV

  • The Arm KleidiCV library is designed for image processing and integrates into any CV framework to enable the best performance for CV workloads on Arm.
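
Because KleidiCV plugs in beneath a framework's API, ordinary calls like the OpenCV ones below are what end up accelerated; the Python code itself does not change, and pairing it with OpenCV here is an assumption made for illustration.

    import cv2
    import numpy as np

    # Synthetic 1080p frame as a stand-in for camera or file input.
    frame = (np.random.rand(1080, 1920, 3) * 255).astype(np.uint8)

    # Typical CV pipeline steps; on an Arm build with KleidiCV enabled,
    # these run on the accelerated kernels with no source changes.
    resized = cv2.resize(frame, (960, 540))
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)
    print(blurred.shape, blurred.dtype)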

Arm SIMD code

  • Optimize your AI/ML workloads with Arm SIMD code, written in assembly or using Arm Intrinsics in C/C++, to unlock significant performance gains.

Join the Arm Developer Program


Join the Arm Developer Program to build your future on Arm. Get fresh insights directly from Arm experts,
connect with like-minded peers for advice, or build on your expertise and become an Arm Ambassador.

Join Now

Community Support

Learn from the Community

Talk directly to Arm expert Zach Lasiuk and the broader Arm community about servers and cloud computing today.

Zach Lasiuk

Zach helps software devs do their best work on Arm, specializing in cloud migration and GenAI apps. He is an XTC judge in Deep Tech and AI Ethics.

Tell Us What We Are Missing

Think we are missing some resources? Have some examples to share from your experience? Let us know directly via the link below.