pytorch-fsdp2
Master PyTorch FSDP2 for large models.
Category: Software Engineering (Community)
Tags: llm, checkpointing, pytorch, distributed training, mixed precision, fsdp2, model sharding
Author: DoanNgocCuong
Version: 1.0.0
System Documentation
What problem does it solve?
This Skill enables coding agents to correctly integrate PyTorch Fully Sharded Data Parallel v2 (FSDP2) into training scripts, addressing single-GPU memory limits and the setup complexity of distributed training.
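A minimal sketch of the integration, assuming PyTorch 2.6+ (where fully_shard is exported from torch.distributed.fsdp), a torchrun launch with one process per GPU, and a hypothetical toy model standing in for a real transformer:

```python
# Minimal FSDP2 sketch, assuming PyTorch >= 2.6 and a torchrun
# launch (one process per GPU). The toy model is hypothetical.
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import fully_shard

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# Toy stand-in for a transformer; very large models would instead be
# built on the meta device and materialized after sharding.
model = nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(8)]).cuda()

# 1D mesh spanning all ranks; parameters are sharded along it.
mesh = init_device_mesh("cuda", (dist.get_world_size(),))

# Shard each block first, then the root module.
for block in model:
    fully_shard(block, mesh=mesh)
fully_shard(model, mesh=mesh)

# The optimizer is built on the sharded (DTensor) parameters, so its
# state is sharded too.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```

Sharding each block before the root lets FSDP2 all-gather and free each block's parameters independently during forward and backward, which is what keeps peak memory well below the full model size.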
Core Features & Use Cases
- FSDP2 Integration: Adds PyTorch FSDP2 (fully_shard) with proper initialization, sharding, mixed precision, and distributed checkpointing.
- Memory Optimization: Essential for training models that exceed single-GPU memory capacity.
- DTensor-based Sharding: Leverages DTensor for inspectable, per-parameter sharding, composable with DeviceMesh (see the sketch after this list).
- Use Case: When training a large language model that requires more VRAM than available on a single GPU, this Skill ensures FSDP2 is applied correctly to shard parameters, gradients, and optimizer states across multiple GPUs or nodes.
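A sketch of mixed precision and DTensor inspection, used in place of the plain fully_shard calls in the earlier sketch; the MixedPrecisionPolicy and public DTensor import paths are assumed from PyTorch 2.6+:

```python
# Mixed precision and DTensor inspection with FSDP2, assuming the
# `model` and `mesh` from the earlier sketch (not yet sharded).
import torch
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard
from torch.distributed.tensor import DTensor

# Run compute in bf16 while reducing gradients in fp32.
mp_policy = MixedPrecisionPolicy(param_dtype=torch.bfloat16,
                                 reduce_dtype=torch.float32)
for block in model:
    fully_shard(block, mesh=mesh, mp_policy=mp_policy)
fully_shard(model, mesh=mesh, mp_policy=mp_policy)

# Sharded parameters are DTensors: each rank stores only its shard,
# and mesh/placement metadata is inspectable per parameter.
for name, param in model.named_parameters():
    assert isinstance(param, DTensor)
    print(name, param.placements, param.to_local().shape)
```

Here param_dtype=torch.bfloat16 roughly halves compute and communication cost, while reduce_dtype=torch.float32 keeps gradient reduce-scatter numerically stable.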
Quick Start
Use the pytorch-fsdp2 skill to add PyTorch FSDP2 to your existing training script, ensuring correct initialization, sharding, and checkpointing.
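For the checkpointing step, a sketch using torch.distributed.checkpoint (DCP) with the sharded model and optimizer from the earlier sketches; the checkpoint path ckpt/step_100 is hypothetical:

```python
# Distributed checkpointing (DCP) for an FSDP2 model, assuming
# PyTorch >= 2.6 and the sharded `model`/`optimizer` from above.
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import (
    get_state_dict,
    set_state_dict,
)

# Save: every rank writes only its own shards, so no rank-0 gather
# or full-model materialization is required.
model_sd, optim_sd = get_state_dict(model, optimizer)
dcp.save({"model": model_sd, "optim": optim_sd},
         checkpoint_id="ckpt/step_100")

# Load: read shards into matching state dicts, then apply them to
# the live (already sharded) model and optimizer.
state = {"model": model_sd, "optim": optim_sd}
dcp.load(state, checkpoint_id="ckpt/step_100")
set_state_dict(model, optimizer,
               model_state_dict=state["model"],
               optim_state_dict=state["optim"])
```

Because each rank reads and writes only its own shards, save and load times scale with the shard size rather than the full model size.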
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: pytorch-fsdp2
Download link: https://github.com/DoanNgocCuong/continuous-training-pipeline_T3_2026/archive/main.zip#pytorch-fsdp2
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.