hyperparameter-optimization

Community

Unified PPO hyperparam and reward-weight tuning.

Author: mzqef
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill automates the joint tuning of PPO hyperparameters and reward/penalty weights, enabling a single, automated search to improve training efficiency, stability, and policy performance in robotic navigation tasks.

Core Features & Use Cases

  • Unified AutoML for PPO parameters (learning rate, entropy, clipping, epochs) and reward scales
  • Supports grid, random, and Bayesian search strategies with constraint-based filtering to skip unstable configurations
  • Provides a ready-to-run Quick Start and analysis tooling to compare configurations and export best results
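The search strategies above can be sketched as follows. This is a minimal illustration of random search with constraint-based filtering, not the Skill's actual implementation; all parameter names and the stability rule are hypothetical.

```python
import random

# Hypothetical joint search space: PPO hyperparameters plus reward/penalty
# scales, tuned together (keys are illustrative, not the Skill's config keys).
SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "entropy_coef": [0.0, 0.005, 0.01],
    "clip_range": [0.1, 0.2, 0.3],
    "n_epochs": [4, 8, 16],
    "progress_reward_scale": [0.5, 1.0, 2.0],
    "collision_penalty_scale": [0.5, 1.0, 2.0],
}

def is_stable(cfg):
    # Constraint-based filter: reject combinations that commonly destabilize
    # PPO, e.g. the highest learning rate paired with the widest clip range.
    return not (cfg["learning_rate"] >= 1e-3 and cfg["clip_range"] >= 0.3)

def random_search(n_trials, seed=0):
    # Sample configurations at random, keeping only those that pass the filter.
    rng = random.Random(seed)
    trials = []
    while len(trials) < n_trials:
        cfg = {key: rng.choice(values) for key, values in SPACE.items()}
        if is_stable(cfg):
            trials.append(cfg)
    return trials

trials = random_search(8)
print(len(trials))  # 8 candidate configurations, all passing the filter
```

Grid and Bayesian strategies would replace the sampling step (exhaustive enumeration or a surrogate-model-guided proposal) while reusing the same stability filter.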

Quick Start

uv run starter_kit_schedule/scripts/automl.py --mode stage --budget-hours 12 --hp-trials 8

Check progress in starter_kit_schedule/progress/automl_state.yaml

Dependency Matrix

Required Modules

None required

Components

Standard package

💻 Claude Code Installation

Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: hyperparameter-optimization
Download link: https://github.com/mzqef/MotrixLab/archive/main.zip#hyperparameter-optimization

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
