
ONNX Runtime

Cross-platform ML inference engine supporting ONNX models on diverse hardware

ONNX Runtime is an open-source cross-platform inference and training acceleration library from Microsoft that runs models in the Open Neural Network Exchange (ONNX) format on CPUs, GPUs, and specialized AI accelerators. It includes hardware-specific execution providers that optimize performance for Intel, NVIDIA, AMD, ARM, and other processors. Developers building ML applications across platforms use ONNX Runtime to deploy trained models from PyTorch, TensorFlow, or scikit-learn in production environments including web, mobile, cloud, and edge without framework-specific dependencies.

Key Features

  • Cross-platform: Windows, Linux, macOS, web (WebAssembly), Android, and iOS
  • Hardware acceleration on CPUs, GPUs, and AI accelerators
  • Runs any model in the standard ONNX format
  • Multiple execution providers (e.g. CUDA, TensorRT, OpenVINO, DirectML, CoreML)
  • Deploys to both edge devices and the cloud
#inference #onnx #microsoft #cross-platform #open-source

Get Started

Visit ONNX Runtime
Free
Completely free to use

Quick Info

Category
AI Infrastructure
Pricing
Free
