pytorch-fsdp2

Community

Master PyTorch FSDP2 for large models.

Author: DoanNgocCuong
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill enables coding agents to correctly integrate PyTorch's Fully Sharded Data Parallel v2 (FSDP2) into training scripts, addressing single-GPU memory limits and the setup complexity of distributed training.

Core Features & Use Cases

  • FSDP2 Integration: Adds PyTorch FSDP2 (fully_shard) with proper initialization, sharding, mixed precision, and distributed checkpointing (see the sketch after this list).
  • Memory Optimization: Essential for training models that exceed single-GPU memory capacity.
  • DTensor-based Sharding: Leverages DTensor for inspectable, per-parameter sharding, composable with DeviceMesh.
  • Use Case: When training a large language model that requires more VRAM than available on a single GPU, this Skill ensures FSDP2 is applied correctly to shard parameters, gradients, and optimizer states across multiple GPUs or nodes.
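
To make the integration concrete, here is a minimal sketch of what the Skill sets up. The model class and its `.layers` attribute are hypothetical stand-ins; the FSDP2 API shown (`fully_shard`, `MixedPrecisionPolicy` from `torch.distributed.fsdp`) is the one available in recent PyTorch releases (2.6+):

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import fully_shard, MixedPrecisionPolicy

# Standard torchrun-style setup: one process per GPU.
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = MyTransformer().cuda()  # hypothetical model with a .layers list

# Compute in bf16, reduce gradients in fp32 for numerical stability.
mp = MixedPrecisionPolicy(param_dtype=torch.bfloat16, reduce_dtype=torch.float32)

# Shard each transformer block, then the root module, so parameters,
# gradients, and optimizer states are partitioned across ranks.
for block in model.layers:
    fully_shard(block, mp_policy=mp)
fully_shard(model, mp_policy=mp)

# Parameters are now DTensors; the sharding is inspectable per parameter.
param = next(model.parameters())
print(type(param), param.placements)  # e.g. DTensor, (Shard(dim=0),)

# Construct the optimizer after sharding so it sees the DTensor parameters.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```

Launched with, e.g., `torchrun --nproc_per_node=8 train.py`. Sharding each block before the root lets FSDP2 gather and reshard at block granularity instead of materializing the whole model at once.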

Quick Start

Use the pytorch-fsdp2 skill to add PyTorch FSDP2 to your existing training script, ensuring correct initialization, sharding, and checkpointing.
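
For the checkpointing piece, FSDP2 pairs with PyTorch's distributed checkpointing module (`torch.distributed.checkpoint`), which lets every rank write its own shards in parallel. A minimal sketch, assuming the `model` and `optimizer` from the example above:

```python
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import get_state_dict

# Gather DCP-compatible (sharded) state dicts for model and optimizer.
model_state, optim_state = get_state_dict(model, optimizer)

# Each rank writes its own shards; no rank-0 gather of the full model.
dcp.save({"model": model_state, "optim": optim_state},
         checkpoint_id="checkpoints/step_1000")
```

Loading mirrors this with `dcp.load` plus `set_state_dict` from the same module.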

Dependency Matrix

Required Modules

None required

Components

references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: pytorch-fsdp2
Download link: https://github.com/DoanNgocCuong/continuous-training-pipeline_T3_2026/archive/main.zip#pytorch-fsdp2

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
