unsloth-training
Category: Community
Description: Fine-tune LLMs efficiently with RL and SFT.
Author: ScientiaCapital
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill fine-tunes large language models efficiently with Unsloth, using either GRPO reinforcement learning or supervised fine-tuning (SFT). It reduces memory and time costs while enabling advanced features such as FP8 training, vision fine-tuning, and mobile deployment.
Core Features & Use Cases
- GRPO RL training with reward design and LoRA adapters
- SFT training with packing and long-context options
- FP8 training for significant VRAM savings
- Vision fine-tuning for VLM tasks
- Docker-based training and reproducible environments
- Mobile deployment support through QAT/ExecuTorch export
- GGUF export options for various serving backends
- End-to-end pipeline from data prep to export and deployment
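To make the "SFT training with packing" feature concrete, here is a minimal pure-Python sketch of what sequence packing does: short tokenized examples are concatenated into fixed-length blocks so training compute is not wasted on padding. The function name, EOS id, and greedy strategy are illustrative assumptions; Unsloth and TRL handle packing internally.

```python
# Illustrative sketch of sequence packing for SFT (hypothetical helper;
# not the Skill's actual implementation).

def pack_sequences(tokenized_examples, max_seq_length, eos_id=2):
    """Greedily pack token-id lists into blocks of at most max_seq_length."""
    blocks, current = [], []
    for ids in tokenized_examples:
        candidate = ids + [eos_id]  # separate examples with an EOS token
        if current and len(current) + len(candidate) > max_seq_length:
            blocks.append(current)  # start a new block when the current one is full
            current = []
        current.extend(candidate[:max_seq_length])  # truncate oversized examples
    if current:
        blocks.append(current)
    return blocks

examples = [[5, 6, 7], [8, 9], [10, 11, 12, 13]]
blocks = pack_sequences(examples, max_seq_length=8)
# → [[5, 6, 7, 2, 8, 9, 2], [10, 11, 12, 13, 2]]
```

The first two examples fit together in one 8-token block; the third starts a new block. Real packing also tracks attention masks so packed examples do not attend to each other.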
Quick Start
Run the GRPO training script with a small dataset to start RL fine-tuning.
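GRPO training hinges on the reward function that scores each sampled completion. As a hedged illustration of the "reward design" step, here is a toy reward that checks an `<answer>...</answer>` output format and answer correctness; the tag convention, weights, and function name are assumptions for this sketch, not the Skill's actual reward.

```python
# Hypothetical GRPO reward function (illustrative only): reward format
# compliance with +0.5 and a correct final answer with an additional +1.0.
import re

ANSWER_RE = re.compile(r"<answer>(.*?)</answer>", re.DOTALL)

def reward_fn(completion: str, gold: str) -> float:
    score = 0.0
    match = ANSWER_RE.search(completion)
    if match:
        score += 0.5  # completion followed the required tag format
        if match.group(1).strip() == gold:
            score += 1.0  # extracted answer matches the reference
    return score

rewards = [reward_fn(c, "42") for c in
           ["The answer is 42",                   # no tags   -> 0.0
            "<answer>41</answer>",                # wrong     -> 0.5
            "Reasoning... <answer>42</answer>"]]  # correct   -> 1.5
```

In GRPO the trainer samples several completions per prompt, scores each with a function like this, and pushes the policy toward the higher-reward samples, so shaping partial credit (format vs. correctness) matters.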
Dependency Matrix
Required Modules: None required
Components: references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill: Name: unsloth-training Download link: https://github.com/ScientiaCapital/skills/archive/main.zip#unsloth-training Please download this .zip file, extract it, and install it in the .claude/skills/ directory.