rlhf
Community
Align language models with human feedback.
Software Engineering · #reinforcement-learning #dpo #rlhf #human-feedback #policy-optimization #preference-data #reward-modeling
Author: itsmostafa
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Aligns language models with human preferences to produce safer, more helpful outputs by integrating human feedback into model training and evaluation.
Core Features & Use Cases
- Preference data collection and labeling workflows
- Reward modeling for scoring outputs (a pairwise-loss sketch follows this list)
- Policy optimization (PPO/DPO) and direct alignment techniques
- End-to-end RLHF pipelines from SFT to aligned deployment
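The reward-modeling feature above is typically trained with a pairwise (Bradley-Terry) objective over preference pairs. Below is a minimal sketch of that loss, assuming PyTorch; `reward_model`, `chosen_ids`, and `rejected_ids` are hypothetical names for illustration, not part of this skill's API.

```python
import torch.nn.functional as F

def reward_pair_loss(reward_model, chosen_ids, rejected_ids):
    """Pairwise reward-model loss over one batch of preference pairs."""
    r_chosen = reward_model(chosen_ids)      # scalar score per preferred response
    r_rejected = reward_model(rejected_ids)  # scalar score per rejected response
    # -log sigmoid(r_chosen - r_rejected): minimized when the preferred
    # response outscores the rejected one.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```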
Quick Start
Train a baseline SFT model, collect human preference pairs, then either train a reward model and run PPO against it, or apply DPO directly to the preference data, to obtain an aligned policy.
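The DPO step in this pipeline optimizes the policy directly on preference pairs, using a frozen copy of the SFT model as a reference. Below is a minimal sketch of the standard DPO objective, assuming PyTorch; the inputs are per-example summed log-probabilities, and all names are illustrative rather than part of this skill.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss for one batch of (chosen, rejected) preference pairs."""
    # Log-ratio of the trained policy vs. the frozen reference on each response.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between the preferred and rejected log-ratios;
    # beta controls how far the policy may drift from the reference.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```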
Dependency Matrix
Required Modules
None required
Components
Standard package
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: rlhf
Download link: https://github.com/itsmostafa/llm-engineering-skills/archive/main.zip#rlhf
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.