Optimize and Accelerate Machine Learning Inferencing and Training

Speed up the machine learning process

Built-in optimizations that deliver up to 17X faster inferencing and up to 1.4X faster training

Plug into your existing technology stack

Support for a variety of frameworks, operating systems and hardware platforms

Build using proven technology

Used in Office 365, Visual Studio and Bing, delivering over 20 billion inferences every day

Get Started Easily

OS


Windows
Linux
Mac
Android (Preview)
iOS (Preview)

API


Python (3.6-3.9)
C++
C#
C
Java
JavaScript (Node.js)
WinRT

Architecture


x64
x86
ARM64
ARM32

Hardware Acceleration


Default CPU
ACL (Preview)
ArmNN (Preview)
CUDA
DirectML
DNNL
MIGraphX (Preview)
NNAPI (Preview)
NUPHAR (Preview)
OpenVINO
Rockchip NPU (Preview)
TensorRT
Vitis AI (Preview)
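
Each entry in the list above corresponds to an ONNX Runtime execution provider. As a minimal Python sketch (assuming a local model file named model.onnx, which is a placeholder path), this is how a session can be pointed at a preferred accelerator while keeping the default CPU provider as a fallback:

import onnxruntime as ort

# Show which execution providers this build of ONNX Runtime supports.
print(ort.get_available_providers())

# Prefer CUDA when available; fall back to the default CPU provider.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

Other accelerators in the list are selected the same way by provider name, for example TensorrtExecutionProvider or OpenVINOExecutionProvider.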

Installation Instructions

Select a combination of the options above to view installation instructions.
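
For a typical Python setup, the CPU and GPU builds install from PyPI (pick one per environment):

pip install onnxruntime       # CPU build
pip install onnxruntime-gpu   # CUDA-enabled build

And a minimal inference sketch, assuming model.onnx is a placeholder for your model and that it takes a single image-shaped float input:

import numpy as np
import onnxruntime as ort

# The CPU execution provider is available in every build.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy input; the shape must match what the model expects.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Passing None as the first argument returns all model outputs.
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)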

“Using a common model and code base, the ONNX Runtime allows Peakspeed to easily flip between platforms to help our customers choose the most cost-effective solution based on their infrastructure and requirements.”

– Oscar Kramer, Chief Geospatial Scientist, Peakspeed

“The ONNX Runtime API for Java enables Java developers and Oracle customers to seamlessly consume and execute ONNX machine-learning models, while taking advantage of the expressive power, high performance, and scalability of Java.”

– Stephen Green, Director of Machine Learning Research Group, Oracle

“We use ONNX Runtime to accelerate model training for a 300M+ parameters model that powers code autocompletion in Visual Studio IntelliCode.”

– Neel Sundaresan, Director SW Engineering, Data & AI, Developer Division, Microsoft

“ONNX Runtime has vastly increased Vespa.ai’s capacity for evaluating large models, both in performance and model types we support.”

– Lester Solbakken, Principal Engineer, Vespa.ai, Verizon Media

News & Announcements


Accelerate and simplify Scikit-learn model inference with ONNX Runtime


ONNX Runtime scenario highlight: Vespa.ai integration


Introducing ONNX Runtime mobile – a reduced size, high performance package for edge devices

Resources


“ONNX Runtime enables our customers to easily apply NVIDIA TensorRT’s powerful optimizations to machine learning models, irrespective of the training framework, and deploy across NVIDIA GPUs and edge devices.”

– Kari Ann Briski, Sr. Director, Accelerated Computing Software and AI Product, NVIDIA


“We are excited to support ONNX Runtime on the Intel® Distribution of OpenVINO™. This accelerates machine learning inference across Intel hardware and gives developers the flexibility to choose the combination of Intel hardware that best meets their needs from CPU to VPU or FPGA.”

– Jonathan Ballon, Vice President and General Manager, Intel Internet of Things Group


“With support for ONNX Runtime, our customers and developers can cross the boundaries of the model training framework and easily deploy ML models on Rockchip NPU-powered devices.”

– Feng Chen, Senior Vice President, Rockchip


“Xilinx is excited that Microsoft has announced Vitis™ AI interoperability and runtime support for ONNX Runtime, enabling developers to deploy machine learning models for inference to FPGA IaaS such as Azure NP series VMs and Xilinx edge devices.”

– Sudip Nag, Corporate Vice President, Software & AI Products at Xilinx