torch-pipeline-parallelism

Official

Scale training with PyTorch pipeline parallelism.

Author: letta-ai
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill provides guidance for implementing PyTorch pipeline parallelism for distributed training of large language models. It covers model partitioning, inter-rank communication, gradient flow management, and common pitfalls.

Core Features & Use Cases

  • Model partitioning: Split transformer layers into contiguous stages, one per rank.
  • Inter-rank communication: Point-to-point send/recv of activations, with activation caching for the backward pass.
  • Gradient flow management: All-forward-all-backward (AFAB) scheduling, per-microbatch loss scaling, and backward synchronization (see the sketch after this list).
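
The mechanics behind these features can be sketched with plain torch.distributed point-to-point calls. The snippet below is a minimal illustration, not the Skill's reference implementation: the stage layout and the names build_stage, run_afab, NUM_MICROBATCHES, and HIDDEN are assumptions made for this example.

```python
# Minimal sketch of a manual pipeline stage using torch.distributed
# point-to-point calls. Names such as build_stage, run_afab, NUM_MICROBATCHES,
# and HIDDEN are assumptions made for this example, not part of the Skill.
import torch
import torch.nn as nn
import torch.distributed as dist

NUM_MICROBATCHES = 4   # assumed microbatch count for AFAB scheduling
HIDDEN = 512           # assumed hidden size shared by every stage

def build_stage(rank: int) -> nn.Module:
    # Model partitioning: each rank instantiates only its own contiguous
    # slice of layers (here, two identical blocks per stage for simplicity).
    return nn.Sequential(
        nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=8),
        nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=8),
    ).cuda()

def run_afab(stage: nn.Module, rank: int, world_size: int, batch: torch.Tensor):
    """All-forward-all-backward: run every microbatch forward while caching
    activations, then run all backward passes in reverse order."""
    microbatches = batch.chunk(NUM_MICROBATCHES)
    inputs, outputs = [], []

    # ---- all forward passes ----
    for mb in microbatches:
        if rank == 0:
            x = mb.cuda()
        else:
            # Inter-rank communication: receive activations from the previous stage.
            x = torch.empty_like(mb, device="cuda")
            dist.recv(x, src=rank - 1)
            x.requires_grad_(True)
        y = stage(x)
        inputs.append(x)
        outputs.append(y)                      # activation caching for backward
        if rank != world_size - 1:
            dist.send(y.detach(), dst=rank + 1)

    # ---- all backward passes, in reverse microbatch order ----
    for x, y in reversed(list(zip(inputs, outputs))):
        if rank == world_size - 1:
            # Loss scaling: divide by the microbatch count so accumulated
            # gradients match a single large-batch step.
            loss = y.float().mean() / NUM_MICROBATCHES
            loss.backward()
        else:
            # Receive the gradient of this stage's output from the next stage.
            grad_y = torch.empty_like(y)
            dist.recv(grad_y, src=rank + 1)
            y.backward(grad_y)
        if rank != 0:
            # Send the gradient of this stage's input back to the previous stage.
            dist.send(x.grad, dst=rank - 1)

if __name__ == "__main__":
    # Assumes a torchrun launch with one process per GPU on a single node.
    dist.init_process_group("nccl")
    rank, world_size = dist.get_rank(), dist.get_world_size()
    torch.cuda.set_device(rank)
    stage = build_stage(rank)
    dummy = torch.randn(8, 16, HIDDEN)  # (seq, batch, hidden) dummy activations
    run_afab(stage, rank, world_size, dummy)
    dist.barrier()
    dist.destroy_process_group()
```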

Quick Start

Implement a 3-stage pipeline across 4 GPUs and validate that gradients flow through every stage; a minimal validation sketch follows.
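
As a quick sanity check (again a sketch, assuming the manual AFAB loop shown above and a hypothetical train.py entry point), launch one process per GPU and assert that every stage's parameters received gradients:

```python
# Hypothetical gradient-flow check; run on each rank after one AFAB step.
# `stage` is that rank's nn.Module partition, as in the sketch above.
import torch.nn as nn

def validate_gradient_flow(stage: nn.Module, rank: int) -> None:
    # Every trainable parameter should have received a gradient.
    missing = [name for name, p in stage.named_parameters()
               if p.requires_grad and p.grad is None]
    if missing:
        raise RuntimeError(f"rank {rank}: no gradient reached {missing}")
    total_norm = sum(p.grad.norm().item()
                     for p in stage.parameters() if p.grad is not None)
    print(f"rank {rank}: gradient flow OK, total grad norm {total_norm:.4f}")

# Example launch, one process per GPU (script name is a placeholder):
#   torchrun --nproc_per_node=4 train.py
```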

Dependency Matrix

Required Modules

None required

Components

Standard package

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: torch-pipeline-parallelism
Download link: https://github.com/letta-ai/skills/archive/main.zip#torch-pipeline-parallelism

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.