torch-pipeline-parallelism

Community

Scale LLM training with PyTorch.

Author: Zurybr
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill addresses the challenge of training large language models whose parameters and activations exceed the memory capacity of a single GPU, by providing a structured approach to implementing PyTorch pipeline parallelism.

Core Features & Use Cases

  • Model Partitioning: Distributes model layers across multiple GPUs (see the partitioning sketch after this list).
  • Inter-Rank Communication: Manages tensor and gradient flow between stages.
  • AFAB Scheduling: Implements the All-Forward-All-Backward execution strategy.
  • Use Case: When training a multi-billion-parameter LLM, this Skill helps partition the model across a cluster of GPUs, enabling training runs whose weights and activations would not fit on a single device.
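
As a minimal sketch of the partitioning feature above, the snippet below splits a toy nn.Sequential model into contiguous per-rank stages. The toy model, the layer sizes, and the build_stage helper are illustrative assumptions for this page, not APIs provided by the Skill; a real LLM needs a model-specific split.

```python
# Hedged sketch: contiguous layer partitioning across pipeline ranks.
# The toy nn.Sequential model and build_stage helper are illustrative
# assumptions, not part of this Skill.
import torch.nn as nn
import torch.distributed as dist

def build_stage(full_model: nn.Sequential, rank: int, world_size: int) -> nn.Sequential:
    """Return only the contiguous slice of layers owned by this pipeline rank."""
    layers = list(full_model.children())
    per_stage = (len(layers) + world_size - 1) // world_size  # ceil division
    start = rank * per_stage
    return nn.Sequential(*layers[start:start + per_stage]).to(f"cuda:{rank}")

if __name__ == "__main__":
    dist.init_process_group("nccl")  # assumes one GPU per process, e.g. launched via torchrun
    rank, world = dist.get_rank(), dist.get_world_size()
    # Toy stand-in for an LLM's stack of transformer blocks.
    full_model = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(8)])
    stage = build_stage(full_model, rank, world)
```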

Quick Start

Implement PyTorch pipeline parallelism for distributed LLM training by following the guidance in this Skill's references; a hedged sketch of the AFAB schedule is shown below.
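
For orientation only, here is a minimal sketch of an All-Forward-All-Backward step for one interior pipeline stage, written against plain torch.distributed point-to-point calls. The stage module, NUM_MICROBATCHES, the tensor SHAPE, and the assumption that every rank iterates microbatches in the same order are illustrative; the first and last stages (data loading and loss computation) are omitted.

```python
# Hedged sketch of one AFAB (GPipe-style) training step on an interior stage.
# NUM_MICROBATCHES, SHAPE, and the `stage` module are assumed for illustration.
import torch
import torch.distributed as dist

NUM_MICROBATCHES = 4
SHAPE = (8, 1024)  # (micro-batch size, hidden dim), assumed

def afab_step(stage, rank, device):
    prev_rank, next_rank = rank - 1, rank + 1
    saved_inputs, saved_outputs = [], []

    # All forward passes first: receive activations, compute, send downstream.
    for _ in range(NUM_MICROBATCHES):
        x = torch.empty(SHAPE, device=device)
        dist.recv(x, src=prev_rank)
        x.requires_grad_(True)
        y = stage(x)
        dist.send(y, dst=next_rank)
        saved_inputs.append(x)
        saved_outputs.append(y)

    # Then all backward passes: receive output grads, backprop, send input grads.
    # Parameter gradients accumulate on this stage across microbatches.
    for x, y in zip(saved_inputs, saved_outputs):
        grad_y = torch.empty_like(y)
        dist.recv(grad_y, src=next_rank)
        torch.autograd.backward(y, grad_y)
        dist.send(x.grad, dst=prev_rank)
```

After the backward loop, an ordinary optimizer step on the stage's local parameters completes the iteration; recent PyTorch releases also ship a torch.distributed.pipelining package that provides a maintained implementation of this kind of schedule.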

Dependency Matrix

Required Modules

None required

Components

references

💻 Claude Code Installation

Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: torch-pipeline-parallelism
Download link: https://github.com/Zurybr/lefarma-skills/archive/main.zip#torch-pipeline-parallelism

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
