OpenVINO
E813078
OpenVINO is an open-source toolkit from Intel for optimizing and deploying deep learning inference across a range of hardware platforms, especially Intel CPUs, integrated GPUs, VPUs, and FPGAs.
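Part of what makes this hardware range usable is that the runtime can pick a device automatically rather than requiring the caller to hard-code one. A toy sketch of that priority-fallback idea in plain Python — this is an illustration only, not OpenVINO's actual AUTO plugin logic, which also weighs model precision and device capabilities:

```python
# Toy illustration of priority-based device fallback, in the spirit of
# automatic device selection. Device names and the priority order here
# are illustrative assumptions, not OpenVINO's real selection policy.
DEFAULT_PRIORITY = ("GPU", "NPU", "CPU")

def pick_device(available, priority=DEFAULT_PRIORITY):
    """Return the first device from the priority list that the machine exposes."""
    for device in priority:
        if device in available:
            return device
    raise RuntimeError("no supported device found")

# Example: a machine exposing a CPU and an integrated GPU.
print(pick_device({"CPU", "GPU"}))  # prints "GPU"
```

A CPU entry last in the priority list acts as the universal fallback, which mirrors how CPU inference is the baseline target on Intel platforms.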
Statements (88)
| Predicate | Object |
|---|---|
| instanceOf | Intel software product; deep learning inference toolkit; open-source software |
| category | artificial intelligence framework; edge AI toolkit; machine learning library |
| deploymentScenario | cloud inference; edge computing; on-premise servers |
| developer | Intel |
| feature | FP16 optimization; INT8 quantization; automatic device selection; benchmarking tools; heterogeneous execution; model compression; model optimization; multi-device inference; post-training quantization; pre-trained model zoo; runtime inference engine |
| includesComponent | Developer tools and samples; Model Optimizer (legacy component); Open Model Zoo; OpenVINO Model Server; OpenVINO Runtime |
| initialReleaseBy | Intel |
| license | Apache License 2.0 |
| optimizedFor | Intel hardware |
| origin | Intel Computer Vision SDK |
| primaryUse | computer vision inference; deep learning inference optimization; deployment of AI models to edge devices; speech and NLP inference |
| programmingLanguage | C++; Python |
| repository | https://github.com/openvinotoolkit/openvino |
| supportsAccelerationTechnique | asynchronous inference; constant folding; graph-level optimizations; layer-wise optimization; operator fusion |
| supportsAPI | C API; C++ API; Python API; REST API via OpenVINO Model Server |
| supportsDomain | audio processing; computer vision; healthcare AI; industrial IoT; natural language processing; smart retail |
| supportsFramework | Caffe; MXNet; ONNX; PyTorch; TensorFlow |
| supportsHardware | Intel CPU; Intel FPGA; Intel Habana Gaudi (via ONNX Runtime integration); Intel Movidius Myriad; Intel Neural Compute Stick 2; Intel VPU; Intel discrete GPU; Intel integrated GPU |
| supportsLanguageBinding | .NET; C; C++; Go; Java; Node.js; Python; Rust |
| supportsModelFormat | ONNX; OpenVINO IR; PaddlePaddle; TensorFlow Frozen Graph; TensorFlow SavedModel |
| supportsOS | Linux; Windows; macOS (limited, CPU-only) |
| supportsPrecision | BF16; FP16; FP32; INT8; UINT8 |
| website | https://docs.openvino.ai; https://www.openvino.ai |
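Several feature entries above (INT8 quantization, post-training quantization) boil down to mapping FP32 values onto an 8-bit grid. A minimal sketch of symmetric per-tensor quantization in plain Python — real OpenVINO quantization (via tools such as NNCF) is calibration-based and typically per-channel, so this only shows the core mapping:

```python
# Minimal sketch of symmetric per-tensor INT8 quantization: a single
# scale maps floats onto the signed 8-bit grid [-127, 127].
def quantize_int8(values):
    """Quantize a list of floats to INT8 codes plus a shared scale."""
    scale = max(abs(v) for v in values) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor: any scale round-trips exactly
    codes = [max(-127, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    """Recover approximate FP32 values from the INT8 codes."""
    return [code * scale for code in codes]

weights = [0.5, -1.27, 0.003, 1.0]
codes, scale = quantize_int8(weights)
approx = dequantize_int8(codes, scale)
# codes == [50, -127, 0, 100]; approx stays within scale/2 of each weight
```

The round-trip error is bounded by half the scale, which is why calibrating the scale on representative data matters: outliers inflate the scale and coarsen the grid for everything else.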
Referenced by (1)
Full triples — surface form annotated when it differs from this entity's canonical label.