TPUs (via XLA integrations)
E96636
TPUs (via XLA integrations) are Google's specialized Tensor Processing Units, usable as accelerators for PyTorch models through the XLA (Accelerated Linear Algebra) compilation framework.
Observed surface forms (3)
| Surface form | Occurrences |
|---|---|
| PyTorch/XLA runtime | 1 |
| TPUs | 1 |
| XLA (Accelerated Linear Algebra) | 1 |
Statements (47)
| Predicate | Object |
|---|---|
| instanceOf | PyTorch accelerator backend; XLA-based compilation target; hardware accelerator integration |
| abstracts | low-level TPU device management |
| aimsTo | accelerate deep learning workloads; reduce training time for large models |
| benefits | users needing scalable training on Google Cloud TPUs |
| category | hardware-accelerated deep learning backend; machine learning infrastructure |
| compatibleWith | Google Cloud TPU V2; Google Cloud TPU V3; Google Cloud TPU V4 |
| designedFor | high-throughput tensor operations; large batch training |
| developedBy | Google |
| documentationHostedAt | https://github.com/pytorch/xla |
| enables | accelerated tensor computations; execution of PyTorch models on TPUs; graph compilation via XLA |
| exposes | XLA-specific debugging tools; profiling utilities for TPU workloads |
| handles | automatic differentiation on TPU via XLA graphs |
| integratesWith | PyTorch autograd system via XLA |
| mapsTo | TPU cores as PyTorch devices |
| optimizationMethod | ahead-of-time compilation; graph-level optimization; operation fusion |
| partOf | XLA (surface form: PyTorch/XLA project ecosystem) |
| provides | PyTorch-like APIs for TPU execution; device placement utilities; distributed data loader support |
| requires | TPUs (via XLA integrations) (self-link; surface form: PyTorch/XLA runtime); XLA compiler; XLA-compatible PyTorch operations; specialized input pipelines for TPUs |
| supports | data parallel training; distributed training; mixed precision training; model parallel training; synchronous data parallelism across TPU cores |
| supportsFramework | PyTorch; PyTorch (surface form: PyTorch/XLA) |
| targetHardware | Tensor Processing Unit (surface form: Google TPU) |
| usedFor | inference of deep learning models; training neural networks |
| usedIn | large-scale machine learning experiments |
| usesFramework | XLA |
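The `mapsTo` and `provides` rows above (TPU cores exposed as PyTorch devices, with PyTorch-like placement APIs) can be illustrated with a minimal sketch. This assumes the `torch_xla` package from the pytorch/xla project is installed alongside PyTorch; the CPU fallback is added here only so the snippet also runs without TPU hardware.

```python
import torch

def get_accelerator_device() -> torch.device:
    """Return an XLA device when torch_xla is available, else CPU.

    PyTorch/XLA maps TPU cores onto ordinary PyTorch devices
    (e.g. "xla:0"), so placement uses the familiar .to(device) API.
    """
    try:
        import torch_xla.core.xla_model as xm  # part of the pytorch/xla package
        return xm.xla_device()
    except ImportError:
        # Fallback for environments without torch_xla (assumption for this sketch)
        return torch.device("cpu")

device = get_accelerator_device()
model = torch.nn.Linear(4, 2).to(device)   # same placement API as CUDA devices
x = torch.randn(8, 4, device=device)
y = model(x)                               # on TPU, traced and compiled by XLA
```

On a real TPU, operations are recorded lazily into an XLA graph; a step marker (e.g. `xm.mark_step()` in the torch_xla API) triggers compilation and execution, which is where the graph-level optimization and operation fusion listed under `optimizationMethod` happen. On CPU the snippet behaves like ordinary eager PyTorch.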
Referenced by (4)
Surface forms under which the referencing triples mention this entity (annotated when they differ from the canonical label):
- TPUs
- XLA (Accelerated Linear Algebra)
- PyTorch/XLA runtime