rloo

Community

Lower-variance RL with leave-one-out baselines.

Author: atrawog
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

Reinforcement learning training often suffers from high gradient variance, especially in policy optimization with sparse or delayed rewards. RLOO (REINFORCE Leave-One-Out) reduces that variance by baselining each sampled completion's reward against the mean reward of the other completions drawn for the same prompt, stabilizing training and improving sample efficiency.
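The idea in brief: sample k completions per prompt, and for each one subtract the mean reward of the remaining k-1. A minimal sketch of that computation with illustrative values (not TRL's implementation):

```python
# Leave-one-out baseline: for k completions of one prompt with rewards
# r[0..k-1], each sample's baseline is the mean reward of the other k-1
# samples, and its advantage is r[i] minus that baseline.
rewards = [0.2, 0.9, 0.5, 0.4]  # rewards for k = 4 completions of one prompt
k = len(rewards)
total = sum(rewards)
advantages = [r - (total - r) / (k - 1) for r in rewards]
print(advantages)  # e.g. the second sample's advantage is 0.9 - (0.2 + 0.5 + 0.4) / 3
```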

Core Features & Use Cases

  • RLOOTrainer and RLOOConfig for variance-reduced RLHF training
  • Reward function integration using completion_ids for efficient token-based rewards (see the sketch after this list)
  • Thinking-aware patterns and stable policy optimization for reasoning tasks
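A reward function in this style can read token counts directly from completion_ids instead of re-tokenizing text. A minimal sketch, assuming a recent TRL release in which reward functions receive completions and completion_ids as keyword arguments (the parameter name follows this listing and may differ across TRL versions; the brevity criterion is purely illustrative):

```python
def reward_short_completions(completions, completion_ids, **kwargs):
    """Toy reward: favor shorter completions, using token counts taken
    straight from completion_ids so no re-tokenization is needed."""
    return [1.0 - min(len(ids), 256) / 256.0 for ids in completion_ids]
```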

Quick Start

Run a small RLOO training session on a short dataset using RLOOTrainer and the default RLOOConfig, as sketched below.
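A minimal sketch of such a session, assuming a recent TRL release where RLOOTrainer takes a model name, a reward function, and an RLOOConfig; the model, dataset, and output_dir are illustrative choices, not fixed by this Skill:

```python
from datasets import load_dataset
from trl import RLOOConfig, RLOOTrainer

# Small slice of a public summarization dataset for a quick run.
dataset = load_dataset("trl-lib/tldr", split="train[:128]")

def reward_num_unique_chars(completions, **kwargs):
    # Toy reward: favor completions with more unique characters.
    return [float(len(set(c))) for c in completions]

config = RLOOConfig(output_dir="rloo-quickstart")  # near-default settings

trainer = RLOOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_num_unique_chars,
    args=config,
    train_dataset=dataset,
)
trainer.train()
```

RLOOConfig also exposes the usual Hugging Face training arguments (batch size, learning rate, logging), so the defaults here can be tightened once the quick run works.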

Dependency Matrix

Required Modules

None required

Components

Standard package

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: rloo
Download link: https://github.com/atrawog/overthink-plugins/archive/main.zip#rloo

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
