NVIDIA A100
E790764
The NVIDIA A100 is a high-performance data center GPU designed for AI, high-performance computing, and data analytics workloads, featuring advanced Tensor Core acceleration.
Statements (50)
| Predicate | Object |
|---|---|
| instanceOf | data center GPU; graphics processing unit |
| architecture | Ampere |
| codename | GA100 |
| family | NVIDIA data center GPUs |
| hasFeature | HBM2e memory; Multi-Instance GPU; NVLink support; PCI Express 4.0 support; third-generation Tensor Cores |
| launchDate | May 2020 |
| manufacturer | NVIDIA |
| memoryBusWidth | 5120-bit |
| memoryCapacityVariant | 40 GB; 80 GB |
| memoryType | HBM2e |
| processNode | TSMC 7 nm |
| successorOf | NVIDIA V100 |
| supports | Tensor Cores; deep learning inference; deep learning training; mixed-precision computing |
| supportsInterface | HGX A100 platform; PCIe 4.0; SXM4 form factor |
| supportsPrecision | FP16; FP32; FP64; INT4; INT8; TF32 |
| supportsTechnology | CUDA; NVLink; NVSwitch; TensorRT; cuDNN |
| targetMarket | data centers |
| targetWorkloads | artificial intelligence; data analytics; high-performance computing |
| transistorCount | 54,000,000,000 |
| useCase | cloud AI services; enterprise AI training; large-scale recommendation systems; natural language processing; scientific simulations; supercomputing clusters |
| usedIn | NVIDIA DGX A100 system; NVIDIA HGX A100 platform |
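Each statement above is a subject-predicate-object triple. A minimal sketch of how a handful of them could be held and queried in code (the predicate names and values come from the table; the in-memory layout and the `objects` helper are illustrative assumptions, not this knowledge base's actual API):

```python
# A small subset of the A100 statements, as (subject, predicate, object) triples.
TRIPLES = [
    ("NVIDIA A100", "manufacturer", "NVIDIA"),
    ("NVIDIA A100", "architecture", "Ampere"),
    ("NVIDIA A100", "codename", "GA100"),
    ("NVIDIA A100", "memoryCapacityVariant", "40 GB"),
    ("NVIDIA A100", "memoryCapacityVariant", "80 GB"),
    ("NVIDIA A100", "supportsPrecision", "TF32"),
    ("NVIDIA A100", "successorOf", "NVIDIA V100"),
]

def objects(subject: str, predicate: str) -> list[str]:
    """Return every object asserted for (subject, predicate)."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

print(objects("NVIDIA A100", "memoryCapacityVariant"))  # ['40 GB', '80 GB']
```

Multi-valued predicates such as `memoryCapacityVariant` simply appear as multiple triples sharing a subject and predicate, which is why the table renders several objects per row.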
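Among the precisions listed, TF32 is the Ampere-specific tensor-core format: it keeps FP32's sign bit and 8-bit exponent but only 10 mantissa bits. A rough way to see the precision loss is to zero the 13 low mantissa bits of an FP32 value; this truncation-based sketch is an approximation (the hardware's actual rounding behavior may differ):

```python
import struct

def tf32_truncate(x: float) -> float:
    """Approximate TF32 by keeping the sign, the 8-bit exponent,
    and only the top 10 of FP32's 23 mantissa bits."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= 0xFFFFE000  # zero the low 13 mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(tf32_truncate(1.0))         # 1.0 (exactly representable)
print(tf32_truncate(3.14159265))  # pi to only ~3 decimal digits
```

The 10-bit mantissa gives roughly three decimal digits of precision, which is the trade-off TF32 makes to run FP32-range matrix math on the Tensor Cores without code changes.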
Referenced by (3)
Full triples — surface form annotated when it differs from this entity's canonical label.