miles-rl-training


Enterprise RL for large-scale MoE models

Author: DoanNgocCuong
Version: 1.0.0

System Documentation

What problem does it solve?

This Skill provides a robust framework for training large-scale Mixture-of-Experts (MoE) models. It addresses the stability, low-precision training, and train-inference alignment challenges that are central to enterprise-grade reinforcement learning.

Core Features & Use Cases

  • Low-Precision Training: Supports FP8 and INT4 quantization-aware training for massive models (see the FP8 sketch after this list).
  • Performance Optimizations: Includes speculative RL for higher throughput and efficient trainer-to-inference weight synchronization (see the weight-sync sketch after this list).
  • Train-Inference Alignment: Keeps the training and inference pipelines consistent using techniques such as R3 and TIS (see the TIS sketch after this list).
  • Use Case: Train a 1TB+ MoE model like DeepSeek V3 or Qwen3-MoE efficiently using FP8 precision on H100/H200 GPUs, ensuring bit-wise identical train-inference alignment for production deployment.
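
As a rough illustration of the low-precision path, the sketch below shows per-tensor FP8 (E4M3) weight quantization in plain PyTorch. It assumes PyTorch ≥ 2.1 (for torch.float8_e4m3fn) and is not Miles' actual FP8 kernel path; the scaling strategy and function names are illustrative.

```python
# Minimal sketch of per-tensor FP8 (E4M3) weight quantization, assuming
# PyTorch >= 2.1 (torch.float8_e4m3fn). Illustrative only; Miles' real
# FP8 path (kernels, scaling strategy) may differ.
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in E4M3


def quantize_fp8(w: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Quantize a tensor to FP8 E4M3 with a per-tensor scale."""
    amax = w.abs().max().clamp(min=1e-12)
    scale = FP8_E4M3_MAX / amax                  # map |w| into the E4M3 range
    w_fp8 = (w * scale).to(torch.float8_e4m3fn)
    return w_fp8, scale


def dequantize_fp8(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover a higher-precision tensor, e.g. for the backward pass."""
    return w_fp8.to(torch.bfloat16) / scale


w = torch.randn(4096, 4096)
w_fp8, scale = quantize_fp8(w)
w_hat = dequantize_fp8(w_fp8, scale)
print((w - w_hat.float()).abs().max())  # per-tensor quantization error
```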
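
For weight synchronization, a minimal sketch under a strong simplifying assumption: the trainer and rollout workers share an initialized torch.distributed process group, and the trainer (rank 0) simply broadcasts updated parameters. Miles' actual synchronization with the SGLang-side engines is more elaborate; sync_weights is a hypothetical helper.

```python
# Hedged sketch of trainer-to-inference weight synchronization via a
# parameter broadcast. Assumes all ranks have already called
# dist.init_process_group(...); not Miles' real sync mechanism.
import torch
import torch.distributed as dist


def sync_weights(model: torch.nn.Module, src_rank: int = 0) -> None:
    """Broadcast every parameter from the trainer rank to all other ranks."""
    for p in model.parameters():
        # In-place broadcast: rollout workers overwrite their stale copies.
        dist.broadcast(p.data, src=src_rank)
```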
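
Reading TIS as per-token truncated importance sampling (the document does not expand the acronym), the sketch below shows how rollout-time log-probs can correct for train-inference mismatch. The function name and cap value are illustrative, not Miles' API.

```python
# Hedged sketch of a truncated-importance-sampling (TIS) correction,
# assuming "TIS" means per-token truncated importance weights between
# rollout-engine log-probs and trainer log-probs.
import torch


def tis_policy_loss(
    logp_train: torch.Tensor,    # [T] token log-probs from the training engine
    logp_rollout: torch.Tensor,  # [T] token log-probs recorded at rollout time
    advantages: torch.Tensor,    # [T] per-token advantages (e.g. from GRPO)
    cap: float = 2.0,            # truncation cap on the importance ratio
) -> torch.Tensor:
    # Importance ratio pi_train / pi_rollout; truncation keeps a few badly
    # mismatched tokens from dominating the gradient.
    ratio = torch.exp(logp_train - logp_rollout).detach()
    weight = torch.clamp(ratio, max=cap)
    # REINFORCE-style surrogate: the gradient flows through logp_train only.
    return -(weight * advantages * logp_train).mean()
```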

Quick Start

Example prompt: "Use the miles skill to train a Qwen3-30B-A3B model with the GRPO advantage estimator and a batch size of 512."
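
For reference, the GRPO advantage estimator named in the prompt normalizes rewards within a group of rollouts of the same prompt. A minimal sketch (group size and epsilon are illustrative):

```python
# Standard GRPO group-normalized advantages: subtract the group mean and
# divide by the group standard deviation.
import torch


def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: [G] scalar rewards for G rollouts of one prompt."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)


rewards = torch.tensor([1.0, 0.0, 1.0, 0.0])  # e.g. pass/fail verifier scores
print(grpo_advantages(rewards))               # mean-zero, unit-ish scale
```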

Dependency Matrix

Required Modules

  • sglang-router
  • ray
  • torch
  • transformers

Components

references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: miles-rl-training
Download link: https://github.com/DoanNgocCuong/continuous-training-pipeline_T3_2026/archive/main.zip#miles-rl-training

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
