TPUs (via XLA integrations)

E96636

TPUs (via XLA integrations) are Google's specialized tensor processing units that can be used as accelerators for PyTorch models through the XLA compilation framework.
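As a minimal sketch of what this integration looks like in practice: the PyTorch/XLA package exposes TPU cores as ordinary PyTorch devices via `xm.xla_device()`. The snippet below assumes `torch` is installed and falls back to plain CPU when `torch_xla` is absent, so the shape of the workflow is the same either way.

```python
import torch

try:
    # torch_xla exposes XLA devices (TPU cores) as PyTorch devices.
    import torch_xla.core.xla_model as xm
    device = xm.xla_device()
except ImportError:
    # Fallback so the sketch still runs where torch_xla is not installed.
    device = torch.device("cpu")

# Moving the model and data to the XLA device is all that changes
# relative to ordinary PyTorch code; XLA traces and compiles the graph.
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(8, 4, device=device)
y = model(x)
print(tuple(y.shape))  # (8, 2)
```

On a real TPU, a training loop would additionally call `xm.mark_step()` (or use the PyTorch/XLA data-loader wrappers) to cut the lazily traced graph and trigger XLA compilation and execution.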


Observed surface forms (3)

TPUs
XLA (Accelerated Linear Algebra)
PyTorch/XLA runtime

Statements (47)

Predicate Object
instanceOf PyTorch accelerator backend
XLA-based compilation target
hardware accelerator integration
abstracts low-level TPU device management
aimsTo accelerate deep learning workloads
reduce training time for large models
benefits users needing scalable training on Google Cloud TPUs
category hardware-accelerated deep learning backend
machine learning infrastructure
compatibleWith Google Cloud TPU V2
Google Cloud TPU V3
Google Cloud TPU V4
designedFor high-throughput tensor operations
large batch training
developedBy Google
documentationHostedAt https://github.com/pytorch/xla
enables accelerated tensor computations
execution of PyTorch models on TPUs
graph compilation via XLA
exposes XLA-specific debugging tools
profiling utilities for TPU workloads
handles automatic differentiation on TPU via XLA graphs
integratesWith PyTorch autograd system via XLA
mapsTo TPU cores as PyTorch devices
optimizationMethod ahead-of-time compilation
graph-level optimization
operation fusion
partOf XLA
surface form: PyTorch/XLA project ecosystem
provides PyTorch-like APIs for TPU execution
device placement utilities
distributed data loader support
requires TPUs (via XLA integrations) (self-link; surface differs)
surface form: PyTorch/XLA runtime
XLA compiler
XLA-compatible PyTorch operations
specialized input pipelines for TPUs
supports data parallel training
distributed training
mixed precision training
model parallel training
synchronous data parallelism across TPU cores
supportsFramework PyTorch
PyTorch
surface form: PyTorch/XLA
targetHardware Tensor Processing Unit
surface form: Google TPU
usedFor inference of deep learning models
training neural networks
usedIn large-scale machine learning experiments
usesFramework XLA

Referenced by (4)

Full triples — surface form annotated when it differs from this entity's canonical label.

AlphaZero hardwareUsed TPUs (via XLA integrations)
this entity surface form: TPUs
TPU programmedWith TPUs (via XLA integrations)
this entity surface form: XLA (Accelerated Linear Algebra)
TPUs (via XLA integrations) requires TPUs (via XLA integrations) (self-link; surface differs)
this entity surface form: PyTorch/XLA runtime
PyTorch supportsHardware TPUs (via XLA integrations)