llm-training
Community
Master LLM training and finetuning.
Author: eyadsibai
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill provides a comprehensive guide to various frameworks and techniques for training and finetuning large language models, simplifying complex distributed training setups.
Core Features & Use Cases
- Framework Comparison: Offers insights into Accelerate, DeepSpeed, PyTorch Lightning, Ray Train, TRL, and Unsloth, highlighting their best use cases, multi-GPU support, and memory efficiency.
- Memory Optimization: Details techniques like gradient checkpointing, mixed precision, quantization, and flash attention that reduce the memory footprint during training (see the first sketch after this list).
- Decision Guide: Helps users select the most appropriate framework based on their specific scenario, model size, and performance requirements.
- Use Case: When training a 70B+ parameter model, this Skill guides you to DeepSpeed ZeRO-3 for optimal memory savings (see the configuration sketch after this list).
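The memory-optimization techniques above compose well in the Hugging Face stack. Below is a minimal sketch assuming transformers, bitsandbytes, and flash-attn are installed; the model name and every hyperparameter are illustrative placeholders, not recommendations from this Skill.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

# 4-bit quantization: store weights in NF4, run compute in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",               # placeholder model id
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",   # flash attention kernels
)

args = TrainingArguments(
    output_dir="out",
    bf16=True,                                 # mixed precision
    gradient_checkpointing=True,               # recompute activations to save memory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,            # simulate a larger effective batch
)
```

For the 70B+ use case, ZeRO-3 shards parameters, gradients, and optimizer state across GPUs. A hedged configuration sketch follows, passed to the Hugging Face Trainer as a Python dict; the offload and "auto" settings are assumptions to tune per cluster, not prescriptions.

```python
from transformers import TrainingArguments

# ZeRO-3 sketch: partition params/grads/optimizer state; optionally offload to CPU.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "cpu"},       # optional: weights to CPU RAM
        "offload_optimizer": {"device": "cpu"},   # optional: optimizer state to CPU RAM
        "overlap_comm": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",     # "auto" defers to TrainingArguments
    "gradient_accumulation_steps": "auto",
}

args = TrainingArguments(output_dir="out", bf16=True, deepspeed=ds_config)
```

CPU offload trades step time for capacity; drop the offload blocks first if your GPUs have headroom.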
Quick Start
Use the llm-training skill to compare DeepSpeed and Accelerate for distributed training.
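As a concrete baseline for that comparison, here is a minimal Accelerate loop; the toy model and random data are stand-ins so the sketch runs end to end. Launch it across devices with `accelerate launch train.py` (the filename is arbitrary).

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Toy model and data so the loop is self-contained.
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)
loss_fn = torch.nn.CrossEntropyLoss()

accelerator = Accelerator(mixed_precision="bf16")  # assumes bf16-capable hardware
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, labels in dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward(); handles scaling/sync
    optimizer.step()
```

Accelerate can also drive DeepSpeed via `accelerate config`, which keeps the side-by-side comparison cheap.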
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: llm-training
Download link: https://github.com/eyadsibai/ltk/archive/main.zip#llm-training
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.