ONNX Runtime

E814638

ONNX Runtime is a high-performance, cross-platform inference engine for running machine learning models in the Open Neural Network Exchange (ONNX) format across a variety of hardware and deployment environments.


Statements (58)

Predicate Object
instanceOf deep learning runtime
machine learning inference engine
open-source software
developer Microsoft
feature ONNX model compatibility
cross-platform support
graph optimizations
hardware acceleration
high-performance inference
model optimization
quantization support
training support
license MIT License
partOf ONNX ecosystem
programmingLanguage C
C#
C++
Java
JavaScript
Objective-C
Python
Swift
relatedTo ONNX
repository https://github.com/microsoft/onnxruntime
supportsExecutionProvider CPUExecutionProvider
CUDA
CoreML
DirectML
DmlExecutionProvider
OpenVINO
ROCm
TensorRT
supportsFormat ONNX
supportsHardware CPU
FPGA
GPU
NPU
VPU
supportsLanguageBinding C API
C# API
Java API
JavaScript API
Objective-C API
Python API
Swift API
supportsPlatform Android
Azure
Edge devices
Linux
Web
Windows
iOS
macOS
useCase cloud deployment
edge deployment
on-device AI
production inference
website https://onnxruntime.ai

Referenced by (2)

Full triples — surface form annotated when it differs from this entity's canonical label.