training-migration
Community
Migrate PyTorch training to Ascend NPU for speed.
Author: FeRhodium
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill guides the migration of CUDA-based PyTorch training code to Ascend NPU, enabling optimized training with AMP, distributed execution, and robust checkpointing.
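The core change is replacing CUDA device handling with the NPU equivalents exposed by torch_npu. Below is a minimal sketch, assuming torch_npu is installed and patches the torch.npu namespace; the model, optimizer, and tensor shapes are placeholders rather than anything this Skill generates.

```python
import torch
import torch_npu  # Ascend adapter; registers the "npu" device type

# Pick the NPU if one is visible, otherwise fall back to CPU.
device = torch.device("npu:0" if torch.npu.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)           # was .cuda() / .to("cuda")
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(32, 128, device=device)          # tensors created directly on the NPU
targets = torch.randint(0, 10, (32,), device=device)

loss = torch.nn.functional.cross_entropy(model(inputs), targets)
loss.backward()
optimizer.step()
```

Recent torch_npu releases also ship an automatic conversion shim (from torch_npu.contrib import transfer_to_npu) that rewrites cuda calls to npu at import time; if your release includes it, it can reduce manual edits, though explicit device strings as above keep the migration transparent.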
Core Features & Use Cases
- Training loop migration for NPU-accelerated training
- Mixed-precision training with torch_npu AMP (see the AMP sketch after this list)
- Data loading and preprocessing optimizations for NPU
- Distributed training setup using HCCL (see the HCCL sketch after this list)
- Gradient accumulation and clipping for large batches
- Checkpointing and resume across runs
- Performance monitoring and profiling for Ascend
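To make the AMP, gradient-accumulation/clipping, and checkpointing items concrete, here is a hedged sketch of one migrated training loop. It assumes a torch_npu build that mirrors torch.cuda.amp under torch.npu.amp; the model, accumulation factor, clipping threshold, and checkpoint path are illustrative placeholders, not code produced by this Skill.

```python
import torch
import torch_npu

device = torch.device("npu:0")
model = torch.nn.Linear(128, 10).to(device)             # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.npu.amp.GradScaler()                      # NPU counterpart of torch.cuda.amp.GradScaler
accum_steps = 4                                          # gradient accumulation factor (assumed)
max_norm = 1.0                                           # gradient clipping threshold (assumed)

def train_epoch(loader, epoch):
    model.train()
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(loader):
        inputs, targets = inputs.to(device), targets.to(device)
        with torch.npu.amp.autocast():                   # mixed-precision forward pass on the NPU
            loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        scaler.scale(loss / accum_steps).backward()      # scale down so accumulated grads average out
        if (step + 1) % accum_steps == 0:
            scaler.unscale_(optimizer)                   # unscale before clipping
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
            scaler.step(optimizer)
            scaler.update()
            optimizer.zero_grad()
    # Checkpoint at epoch boundaries so runs can resume with the same AMP state.
    torch.save(
        {"model": model.state_dict(),
         "optimizer": optimizer.state_dict(),
         "scaler": scaler.state_dict(),
         "epoch": epoch},
        f"checkpoint_epoch{epoch}.pt",
    )
```

On resume, load the checkpoint with map_location="cpu", restore the model, optimizer, and scaler state dicts, then move the model back to the NPU before continuing training.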
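For the HCCL item, distributed setup mirrors an NCCL launch except for the backend name and device namespace. A minimal sketch, assuming a torchrun-style launcher that sets RANK, WORLD_SIZE, and LOCAL_RANK, and that torch.npu.set_device is available in your torch_npu build:

```python
import os
import torch
import torch.distributed as dist
import torch_npu
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_distributed():
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.npu.set_device(local_rank)             # bind this process to one NPU
    dist.init_process_group(backend="hccl")      # HCCL instead of NCCL
    return local_rank

local_rank = setup_distributed()
model = torch.nn.Linear(128, 10).to(f"npu:{local_rank}")
model = DDP(model, device_ids=[local_rank])      # gradients synchronize over HCCL

# ... training loop ...

dist.destroy_process_group()
```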
Quick Start
Begin by scanning your PyTorch project for training scripts, then configure the migration workflow and execute the migration. After migration, run the produced training code with AMP enabled, verify it with a lightweight test run, and use the profiling hooks to validate performance improvements.
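A lightweight test run can be as small as a device-visibility check plus one forward/backward pass. The sketch below assumes only that torch and torch_npu import cleanly on the target machine:

```python
import torch
import torch_npu

assert torch.npu.is_available(), "No Ascend NPU visible to this process"

device = torch.device("npu:0")
x = torch.randn(8, 16, device=device)
layer = torch.nn.Linear(16, 4).to(device)

out = layer(x)
out.sum().backward()                      # exercises the backward kernels as well
print("smoke test ok:", out.shape, out.device)
```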
Dependency Matrix
Required Modules: None required
Components: Standard package
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill:
Name: training-migration
Download link: https://github.com/FeRhodium/ascend-migration/archive/main.zip#training-migration
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.