1. Introduction
You have a model. You have a single GPU. Training takes 72 hours. You requisition a second machine with four more GPUs, and now you need your code to actually use them. This is the exact moment where most practitioners hit a wall. Not because distributed training is conceptually hard, but because the engineering required to do it correctly (process groups, rank-aware logging, sampler seeding, checkpoint boundaries) is scattered across dozens of tutorials that each cover one piece of the puzzle.
This article is the guide I wish I had when I first scaled training beyond a single node. We will build a complete, production-grade multi-node training pipeline from scratch using PyTorch’s DistributedDataParallel (DDP). Every file is modular, every value is configurable, and every distributed concept is made explicit. By the end, you will have a codebase you can drop into any cluster and start training immediately.
What we’ll cover: the mental model behind DDP, a clean modular project structure, distributed lifecycle management, efficient data loading across ranks, a training loop with mixed precision and gradient accumulation, rank-aware logging and checkpointing, multi-node launch scripts, and the performance pitfalls that trip up even experienced engineers.
The full codebase is available on GitHub. Every code block in this article is pulled directly from that repository.
2. How DDP Works: The Mental Model
Before writing any code, we need a clear mental model. DistributedDataParallel (DDP) is not magic: it is a well-defined communication pattern built on top of collective operations.
The setup is straightforward. You launch N processes (one per GPU, potentially across multiple machines). Each process initializes a process group, a communication channel backed by NCCL (NVIDIA Collective Communications Library) for GPU-to-GPU transfers. Every process gets three identity numbers: its global rank (unique across all machines), its local rank (unique within its machine), and the total world size.
Each process holds an identical copy of the model. Data is partitioned across processes using a DistributedSampler: every rank sees a different slice of the dataset, but the model weights start (and stay) identical.
The critical mechanism is what happens during backward(). DDP registers hooks on every parameter. When a gradient is computed for a parameter, DDP buckets it with nearby gradients and fires an all-reduce operation across the process group. This all-reduce computes the mean gradient across all ranks. Because every rank now has the same averaged gradient, the next optimizer step produces identical weight updates, keeping all replicas in sync without any explicit synchronization code from us.
This is why DDP is strictly superior to the older DataParallel: there is no single “master” GPU bottleneck, no redundant forward passes, and gradient communication overlaps with backward computation.
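To make the mechanism concrete, here is a hand-rolled sketch of the synchronization DDP performs for you. This is not DDP’s actual implementation (which buckets gradients and overlaps communication with the backward pass); it is only the unoptimized idea:

import torch.distributed as dist

def naive_gradient_sync(model, world_size):
    """Average every parameter's gradient across all ranks by hand."""
    for param in model.parameters():
        if param.grad is not None:
            # Sum the gradient tensors from every rank, in place...
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            # ...then divide by the number of ranks to get the mean.
            param.grad /= world_size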

Key terminology
| Term | Meaning |
| --- | --- |
| Rank | Globally unique process ID (0 to world_size - 1) |
| Local Rank | GPU index within a single machine (0 to nproc_per_node - 1) |
| World Size | Total number of processes across all nodes |
| Process Group | Communication channel (NCCL) connecting all ranks |
3. Architecture Overview
A production training pipeline should never be a single monolithic script. Ours is split into six focused modules, each with a single responsibility. Note that config.py sits at the bottom of the dependency graph, acting as the single source of truth for every hyperparameter.

Here is the project structure:
pytorch-multinode-ddp/
├── train.py              # Entry point: training loop
├── config.py             # Dataclass configuration + argparse
├── ddp_utils.py          # Distributed setup, teardown, checkpointing
├── model.py              # MiniResNet (lightweight ResNet variant)
├── dataset.py            # Synthetic dataset + DistributedSampler loader
├── utils/
│   ├── logger.py         # Rank-aware structured logging
│   └── metrics.py        # Running averages + distributed all-reduce
├── scripts/
│   └── launch.sh         # Multi-node torchrun wrapper
└── requirements.txt
This separation means you can swap in a real dataset by modifying only dataset.py, or replace the model by modifying only model.py. The training loop never needs to change.
4. Centralized Configuration
Hard-coded hyperparameters are the enemy of reproducibility. We use a Python dataclass as our single source of configuration. Every other module imports TrainingConfig and reads from it; nothing is hard-coded.
The dataclass doubles as our CLI parser: the from_args() classmethod introspects the field names and types, automatically building argparse flags with defaults. This means you get --batch_size 128 and --no-use_amp for free, without writing a single parser line by hand.
import argparse
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingConfig:
    """Immutable bag of every parameter the training pipeline needs."""
    # Model
    num_classes: int = 10
    in_channels: int = 3
    image_size: int = 32
    # Data
    batch_size: int = 64   # per-GPU
    num_workers: int = 4
    # Optimizer / Scheduler
    epochs: int = 10
    lr: float = 0.01
    momentum: float = 0.9
    weight_decay: float = 1e-4
    # Distributed
    backend: str = "nccl"
    # Mixed Precision
    use_amp: bool = True
    # Gradient Accumulation
    grad_accum_steps: int = 1
    # Checkpointing
    checkpoint_dir: str = "./checkpoints"
    save_every: int = 1
    resume_from: Optional[str] = None
    # Logging & Profiling
    log_interval: int = 10
    enable_profiling: bool = False
    seed: int = 42

    @classmethod
    def from_args(cls) -> "TrainingConfig":
        parser = argparse.ArgumentParser(
            formatter_class=argparse.ArgumentDefaultsHelpFormatter)
        defaults = cls()
        for name, val in vars(defaults).items():
            arg_type = type(val) if val is not None else str
            if isinstance(val, bool):
                # Generates paired --flag / --no-flag options.
                parser.add_argument(f"--{name}", default=val,
                                    action=argparse.BooleanOptionalAction)
            else:
                parser.add_argument(f"--{name}", type=arg_type, default=val)
        return cls(**vars(parser.parse_args()))
Why a dataclass instead of YAML or JSON? Three reasons: (1) type hints are enforced by the IDE and mypy, (2) there is zero dependency on third-party config libraries, and (3) every parameter has a visible default right next to its declaration. For production systems that need hierarchical configs, you can always layer Hydra or OmegaConf on top of this pattern.
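As a quick illustration of the equivalence (this snippet is illustrative only; the flag names are generated from the dataclass fields above):

from config import TrainingConfig

# In code: override any field at construction time.
config = TrainingConfig(batch_size=128, use_amp=False)

# The equivalent CLI invocation, with flags generated by from_args():
#   python train.py --batch_size 128 --no-use_amp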
5. Distributed Lifecycle Management
The distributed lifecycle has three phases: initialize, run, and tear down. Getting any of these wrong can produce silent hangs, so we wrap everything in explicit error handling.
Process Group Initialization
The setup_distributed() function reads the three environment variables that torchrun sets automatically (RANK, LOCAL_RANK, WORLD_SIZE), pins the correct GPU with torch.cuda.set_device(), and initializes the NCCL process group. It returns a frozen dataclass, DistributedContext, that the rest of the codebase passes around instead of re-reading os.environ.
import os

import torch
import torch.distributed as dist

@dataclass(frozen=True)
class DistributedContext:
    """Immutable snapshot of the current process's distributed identity."""
    rank: int
    local_rank: int
    world_size: int
    device: torch.device

def setup_distributed(config: TrainingConfig) -> DistributedContext:
    required_vars = ("RANK", "LOCAL_RANK", "WORLD_SIZE")
    missing = [v for v in required_vars if v not in os.environ]
    if missing:
        raise RuntimeError(
            f"Missing environment variables: {missing}. "
            "Launch with torchrun or set them manually.")
    if not torch.cuda.is_available():
        raise RuntimeError("CUDA is required for NCCL distributed training.")
    rank = int(os.environ["RANK"])
    local_rank = int(os.environ["LOCAL_RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    torch.cuda.set_device(local_rank)
    device = torch.device("cuda", local_rank)
    dist.init_process_group(backend=config.backend)
    return DistributedContext(
        rank=rank, local_rank=local_rank,
        world_size=world_size, device=device)
Checkpointing with Rank Guards
The most common distributed checkpointing bug is all ranks writing to the same file concurrently. We guard saving behind is_main_process(), and loading behind dist.barrier(); this ensures rank 0 finishes writing before other ranks attempt to read.
from pathlib import Path

def save_checkpoint(path, epoch, model, optimizer, scaler=None, rank=0):
    """Persist training state to disk (rank-0 only)."""
    if not is_main_process(rank):
        return
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    state = {
        "epoch": epoch,
        "model_state_dict": model.module.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    }
    if scaler is not None:
        state["scaler_state_dict"] = scaler.state_dict()
    torch.save(state, path)

def load_checkpoint(path, model, optimizer=None, scaler=None, device="cpu"):
    """Restore training state. All ranks load after a barrier."""
    dist.barrier()  # wait for rank 0 to finish writing
    ckpt = torch.load(path, map_location=device, weights_only=False)
    model.load_state_dict(ckpt["model_state_dict"])
    if optimizer and "optimizer_state_dict" in ckpt:
        optimizer.load_state_dict(ckpt["optimizer_state_dict"])
    if scaler and "scaler_state_dict" in ckpt:
        scaler.load_state_dict(ckpt["scaler_state_dict"])
    return ckpt.get("epoch", 0)
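The is_main_process() guard used above, and the cleanup_distributed() teardown called later in main(), are small helpers in ddp_utils.py whose bodies are not excerpted in this article. A minimal sketch of what they plausibly look like:

def is_main_process(rank: int) -> bool:
    # Rank 0 is the designated writer and logger throughout this pipeline.
    return rank == 0

def cleanup_distributed() -> None:
    # Tear down the NCCL process group if one was initialized.
    if dist.is_initialized():
        dist.destroy_process_group()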
6. Model Design for DDP
We use a lightweight ResNet variant called MiniResNet: three residual stages with increasing channels (64, 128, 256), two blocks per stage, global average pooling, and a fully-connected head. It is complex enough to be realistic but light enough to run on any hardware. A condensed sketch of the architecture follows.
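The full definition lives in model.py; this sketch is merely consistent with the description above. The exact kernel sizes and block wiring here are assumptions, so treat the repository version as authoritative:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # 1x1 projection so the skip connection matches when the shape changes.
        self.shortcut = (nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
            nn.BatchNorm2d(out_ch))
            if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.shortcut(x))

class MiniResNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, 1, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        # Three stages (64, 128, 256), two blocks each; stride 2 downsamples.
        self.stages = nn.Sequential(
            ResidualBlock(64, 64), ResidualBlock(64, 64),
            ResidualBlock(64, 128, stride=2), ResidualBlock(128, 128),
            ResidualBlock(128, 256, stride=2), ResidualBlock(256, 256))
        self.head = nn.Linear(256, num_classes)

    def forward(self, x):
        x = self.stages(self.stem(x))
        x = torch.mean(x, dim=(2, 3))  # global average pooling
        return self.head(x)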
The critical DDP requirement: the model must be moved to the correct GPU before wrapping. DDP does not move models for you.
from torch.nn.parallel import DistributedDataParallel as DDP

def create_model(config: TrainingConfig, device: torch.device) -> nn.Module:
    """Instantiate a MiniResNet and move it to device."""
    model = MiniResNet(
        in_channels=config.in_channels,
        num_classes=config.num_classes,
    )
    return model.to(device)

def wrap_ddp(model: nn.Module, local_rank: int) -> DDP:
    """Wrap model with DistributedDataParallel."""
    return DDP(model, device_ids=[local_rank])
Note the two-step pattern: create_model() → wrap_ddp(). This separation is intentional. When loading a checkpoint, you need the unwrapped model (model.module) to load state dicts, then re-wrap. If you fuse creation and wrapping, checkpoint loading becomes awkward.
7. Distributed Data Loading
DistributedSampler is what ensures each GPU sees a unique slice of data. It partitions indices across world_size ranks and returns a non-overlapping subset for each. Without it, every GPU would train on identical batches, burning compute for zero benefit.
There are three details that trip people up:
First, sampler.set_epoch(epoch) must be called at the start of every epoch. The sampler uses the epoch number as a random seed for shuffling. If you forget this, every epoch will iterate over data in the same order, which degrades generalization.
Second, pin_memory=True in the DataLoader pre-allocates page-locked host memory, enabling asynchronous CPU-to-GPU transfers when you call tensor.to(device, non_blocking=True). This overlap is where real throughput gains come from.
Third, persistent_workers=True avoids respawning worker processes every epoch, a significant overhead reduction when num_workers > 0.
from torch.utils.data import DataLoader, DistributedSampler

def create_distributed_dataloader(dataset, config, ctx):
    sampler = DistributedSampler(
        dataset,
        num_replicas=ctx.world_size,
        rank=ctx.rank,
        shuffle=True,
    )
    loader = DataLoader(
        dataset,
        batch_size=config.batch_size,
        sampler=sampler,
        num_workers=config.num_workers,
        pin_memory=True,
        drop_last=True,
        persistent_workers=config.num_workers > 0,
    )
    return loader, sampler
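The SyntheticImageDataset used in main() below is a stand-in that lets you benchmark the pipeline without downloading anything. Its body is not excerpted here, so this is a minimal sketch of what dataset.py might contain (the random-tensor approach is an assumption):

import torch
from torch.utils.data import Dataset

class SyntheticImageDataset(Dataset):
    """Random images and labels for exercising the training pipeline."""

    def __init__(self, size=50000, image_size=32, num_classes=10, channels=3):
        self.size = size
        self.image_size = image_size
        self.num_classes = num_classes
        self.channels = channels

    def __len__(self):
        return self.size

    def __getitem__(self, idx):
        # A per-index generator keeps each sample deterministic across epochs.
        g = torch.Generator().manual_seed(idx)
        image = torch.randn(self.channels, self.image_size, self.image_size,
                            generator=g)
        label = torch.randint(0, self.num_classes, (1,), generator=g).item()
        return image, label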
8. The Training Loop: Where It All Comes Together
This is the heart of the pipeline. The loop below integrates every component we have built so far: DDP-wrapped model, distributed data loader, mixed precision, gradient accumulation, rank-aware logging, learning rate scheduling, and checkpointing.

Mixed Precision (AMP)
Automatic Mixed Precision (AMP) keeps master weights in FP32 but runs the forward pass and loss computation in FP16. This halves memory bandwidth requirements and enables Tensor Core acceleration on modern NVIDIA GPUs, typically yielding a 1.5-2x throughput improvement with negligible accuracy impact.
We use torch.autocast for the forward pass and torch.amp.GradScaler for loss scaling. A subtlety: we create the GradScaler with enabled=config.use_amp. When disabled, the scaler becomes a no-op; same code path, zero overhead, no branching.
Gradient Accumulation
Sometimes you need a larger effective batch size than your GPU memory allows. Gradient accumulation simulates this by running multiple forward-backward passes before stepping the optimizer. The key is to divide the loss by grad_accum_steps before backward(), so the accumulated gradient is correctly averaged.
from contextlib import nullcontext

def train_one_epoch(model, loader, criterion, optimizer, scaler, ctx, config, epoch, logger):
    model.train()
    tracker = MetricTracker()
    total_steps = len(loader)
    use_amp = config.use_amp and ctx.device.type == "cuda"
    autocast_ctx = torch.autocast("cuda", dtype=torch.float16) if use_amp else nullcontext()
    optimizer.zero_grad(set_to_none=True)
    for step, (images, labels) in enumerate(loader):
        images = images.to(ctx.device, non_blocking=True)
        labels = labels.to(ctx.device, non_blocking=True)
        with autocast_ctx:
            outputs = model(images)
            loss = criterion(outputs, labels)
            loss = loss / config.grad_accum_steps  # scale for accumulation
        scaler.scale(loss).backward()
        if (step + 1) % config.grad_accum_steps == 0:
            scaler.step(optimizer)
            scaler.update()
            optimizer.zero_grad(set_to_none=True)  # memory-efficient reset
        # Track the raw (unscaled) loss for logging
        raw_loss = loss.item() * config.grad_accum_steps
        acc = compute_accuracy(outputs, labels)
        tracker.update("loss", raw_loss, n=images.size(0))
        tracker.update("accuracy", acc, n=images.size(0))
        if is_main_process(ctx.rank) and (step + 1) % config.log_interval == 0:
            log_training_step(logger, epoch, step + 1, total_steps,
                              raw_loss, optimizer.param_groups[0]["lr"])
    return tracker
Two details worth highlighting. First, zero_grad(set_to_none=True) deallocates gradient tensors instead of filling them with zeros, saving memory proportional to the model size. Second, data is moved to the GPU with non_blocking=True; this lets the CPU continue filling the next batch while the current one transfers, exploiting the pin_memory overlap.
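MetricTracker, compute_accuracy, and the all_reduce_scalar call used in main() live in utils/metrics.py (the "running averages + distributed all-reduce" module from the project tree). Those bodies are not excerpted here, so the following is a plausible minimal sketch rather than the repository's exact code:

import torch
import torch.distributed as dist

class MetricTracker:
    """Weighted running averages, keyed by metric name."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, name, value, n=1):
        self.sums[name] = self.sums.get(name, 0.0) + value * n
        self.counts[name] = self.counts.get(name, 0) + n

    def average(self, name):
        return self.sums[name] / max(self.counts[name], 1)

def compute_accuracy(outputs, labels):
    """Fraction of correct top-1 predictions in a batch."""
    return (outputs.argmax(dim=1) == labels).float().mean().item()

def all_reduce_scalar(value, world_size, device):
    """Average a Python float across all ranks."""
    t = torch.tensor(value, device=device)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)  # sum contributions from every rank
    return (t / world_size).item()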
The Main Function
The main() function orchestrates the entire pipeline. Note the try/finally pattern guaranteeing that the process group is torn down even if an exception occurs; without this, a crash on one rank can leave other ranks hanging indefinitely.
from torch.optim.lr_scheduler import CosineAnnealingLR

def main():
    config = TrainingConfig.from_args()
    ctx = setup_distributed(config)
    logger = setup_logger(ctx.rank)
    torch.manual_seed(config.seed + ctx.rank)
    model = create_model(config, ctx.device)
    model = wrap_ddp(model, ctx.local_rank)
    optimizer = torch.optim.SGD(model.parameters(), lr=config.lr,
                                momentum=config.momentum,
                                weight_decay=config.weight_decay)
    scheduler = CosineAnnealingLR(optimizer, T_max=config.epochs)
    scaler = torch.amp.GradScaler(enabled=config.use_amp)
    start_epoch = 1
    if config.resume_from:
        start_epoch = load_checkpoint(config.resume_from, model.module,
                                      optimizer, scaler, ctx.device) + 1
    dataset = SyntheticImageDataset(size=50000, image_size=config.image_size,
                                    num_classes=config.num_classes)
    loader, sampler = create_distributed_dataloader(dataset, config, ctx)
    criterion = nn.CrossEntropyLoss()
    try:
        for epoch in range(start_epoch, config.epochs + 1):
            sampler.set_epoch(epoch)
            tracker = train_one_epoch(model, loader, criterion, optimizer,
                                      scaler, ctx, config, epoch, logger)
            scheduler.step()
            avg_loss = all_reduce_scalar(tracker.average("loss"),
                                         ctx.world_size, ctx.device)
            if is_main_process(ctx.rank):
                log_epoch_summary(logger, epoch, {"loss": avg_loss})
            if epoch % config.save_every == 0:
                save_checkpoint(f"checkpoints/epoch_{epoch}.pt",
                                epoch, model, optimizer, scaler, ctx.rank)
    finally:
        cleanup_distributed()
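setup_logger, log_training_step, and log_epoch_summary come from utils/logger.py, which is not excerpted in this article. A minimal rank-aware sketch (the format strings are assumptions):

import logging

def setup_logger(rank: int) -> logging.Logger:
    """Every rank gets a logger, but only rank 0 emits INFO-level records."""
    logger = logging.getLogger("ddp")
    logger.setLevel(logging.INFO if rank == 0 else logging.WARNING)
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            f"[rank {rank}] %(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
    return logger

def log_training_step(logger, epoch, step, total, loss, lr):
    logger.info(f"epoch {epoch} step {step}/{total} loss {loss:.4f} lr {lr:.5f}")

def log_epoch_summary(logger, epoch, metrics):
    summary = " ".join(f"{k}={v:.4f}" for k, v in metrics.items())
    logger.info(f"epoch {epoch} done: {summary}")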
9. Launching Across Nodes
PyTorch’s torchrun (introduced in v1.10 as a replacement for torch.distributed.launch) handles spawning one process per GPU and setting the RANK, LOCAL_RANK, and WORLD_SIZE environment variables. For multi-node training, every node must specify the master node’s address so that all processes can establish the NCCL connection.
Here is our launch script, which reads all tunables from environment variables:
#!/usr/bin/env bash
set -euo pipefail

NNODES="${NNODES:-2}"
NPROC_PER_NODE="${NPROC_PER_NODE:-4}"
NODE_RANK="${NODE_RANK:-0}"
MASTER_ADDR="${MASTER_ADDR:-127.0.0.1}"
MASTER_PORT="${MASTER_PORT:-12355}"

torchrun \
  --nnodes="${NNODES}" \
  --nproc_per_node="${NPROC_PER_NODE}" \
  --node_rank="${NODE_RANK}" \
  --master_addr="${MASTER_ADDR}" \
  --master_port="${MASTER_PORT}" \
  train.py "$@"
For a quick single-node test on one GPU:
torchrun --standalone --nproc_per_node=1 train.py --epochs 2
For two-node training with four GPUs each, run on Node 0:
MASTER_ADDR=10.0.0.1 NODE_RANK=0 NNODES=2 NPROC_PER_NODE=4 bash scripts/launch.sh
And on Node 1:
MASTER_ADDR=10.0.0.1 NODE_RANK=1 NNODES=2 NPROC_PER_NODE=4 bash scripts/launch.sh

10. Performance Pitfalls and Tips
After building hundreds of distributed training jobs, these are the mistakes I see most often:
Forgetting sampler.set_epoch(). Without it, data order is identical every epoch. This is the single most common DDP bug, and it silently hurts convergence.
CPU-GPU transfer bottleneck. Always use pin_memory=True in your DataLoader and non_blocking=True in your .to() calls. Without these, the CPU blocks on every batch transfer.
Logging from all ranks. If every rank prints, the output is interleaved garbage. Guard all logging behind rank == 0 checks.
zero_grad() without set_to_none=True. The default zero_grad() fills gradient tensors with zeros. set_to_none=True deallocates them instead, reducing peak memory.
Saving checkpoints from all ranks. Multiple ranks writing the same file causes corruption. Only rank 0 should save, and all ranks should barrier before loading.
Not seeding with a rank offset. torch.manual_seed(seed + rank) ensures each rank’s data augmentation is different. Without the offset, augmentations are identical across GPUs.
When NOT to Use DDP
DDP replicates the entire model on every GPU. If your model does not fit in a single GPU’s memory, DDP alone will not help. For such cases, look into Fully Sharded Data Parallel (FSDP), which shards parameters, gradients, and optimizer states across ranks, or frameworks like DeepSpeed ZeRO.
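For orientation, swapping in FSDP is mechanically small even though the runtime behavior changes substantially. A hedged sketch, ignoring FSDP’s many wrapping and sharding options:

from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Instead of DDP(model, device_ids=[local_rank]):
model = create_model(config, ctx.device)
model = FSDP(model)  # parameters, gradients, and optimizer state get sharded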
11. Conclusion
We have gone from a single-GPU training mindset to a fully distributed, production-grade pipeline capable of scaling across machines, without sacrificing readability or maintainability.
But more importantly, this was not just about making DDP work. It was about building it correctly.
Let’s distill the most important takeaways:
Key Takeaways
- DDP is deterministic engineering, not magic. Once you understand process groups, ranks, and all-reduce, distributed training becomes predictable and debuggable.
- Structure matters more than scale. A clean, modular codebase (config → data → model → training → utils) is what makes scaling from 1 GPU to 100 GPUs feasible.
- Correct data sharding is non-negotiable. DistributedSampler + set_epoch() is the difference between true scaling and wasted compute.
- Performance comes from small details. pin_memory, non_blocking, set_to_none=True, and AMP together deliver large throughput gains.
- Rank-awareness is essential. Logging, checkpointing, and randomness must all respect rank; otherwise you get chaos.
- DDP scales compute, not memory. If your model does not fit on one GPU, you need FSDP or ZeRO, not more GPUs.
The Bigger Picture
What you have built here is not just a training script; it is a template for real-world ML systems.
This exact pattern is used in:
- Production ML pipelines
- Research labs training large models
- Startups scaling from prototype to infrastructure
And the best part?
You can now:
- Plug in a real dataset
- Swap in a Transformer or custom architecture
- Scale across nodes with zero code changes
What to Explore Next
Once you are comfortable with this setup, the next frontier is memory-efficient and large-scale training:
- Fully Sharded Data Parallel (FSDP) → shard model + gradients
- DeepSpeed ZeRO → shard optimizer states
- Pipeline Parallelism → split models across GPUs
- Tensor Parallelism → split layers themselves
These techniques power today’s largest models, but they all build on the exact DDP foundation you now understand.
Distributed training often feels intimidating, not because it is inherently complex, but because it is rarely presented as a complete system.
Now you have seen the full picture.
And once you see it end-to-end…
Scaling becomes an engineering decision, not a research problem.
What’s Next
This pipeline handles data-parallel training, the most common distributed pattern. When your models outgrow single-GPU memory, explore Fully Sharded Data Parallel (FSDP) for parameter sharding, or DeepSpeed ZeRO for optimizer-state partitioning. For truly massive models, pipeline parallelism (splitting the model across GPUs layer by layer) and tensor parallelism (splitting individual layers) become necessary.
But for the vast majority of training workloads, from ResNets to medium-scale Transformers, the DDP pipeline we built here is exactly what production teams use. Scale it by adding nodes and GPUs; the code handles the rest.
The complete, production-ready codebase for this project is available here: pytorch-multinode-ddp