nox-preference-learner
Community · Train your agent by talking.
Software Engineering · ai agent · context injection · rlhf · behavioral learning · preference learning · conversation feedback
Author: rockywuest
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill allows users to directly influence and shape an AI agent's behavior through natural conversation, adapting its responses without requiring model retraining.
Core Features & Use Cases
- Behavioral Adaptation: Learns preferences across 6 dimensions (autonomy, verbosity, proactivity, formality, technical depth, confirmation seeking) based on user feedback.
- RLHF-lite: Implements a lightweight Reinforcement Learning from Human Feedback loop by injecting learned preferences into the agent's context.
- Use Case: If an agent is too verbose, a user can say "Be more concise" or "TLDR". The agent learns this preference and adjusts its future responses to be shorter.
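The loop described above can be sketched in a few lines of Python. This is a hypothetical illustration, assuming a simple keyword-to-dimension mapping and a bounded score per dimension; the function names, feedback phrases, and score range are illustrative, not the skill's actual API.

```python
# Hypothetical sketch of an RLHF-lite preference loop. The dimension
# names come from the skill's description; everything else is assumed.
DIMENSIONS = ("autonomy", "verbosity", "proactivity",
              "formality", "technical_depth", "confirmation_seeking")

# Illustrative mapping of feedback phrases to (dimension, delta) updates.
FEEDBACK_RULES = {
    "be more concise": ("verbosity", -1),
    "tldr": ("verbosity", -1),
    "explain in more detail": ("verbosity", +1),
    "just do it": ("confirmation_seeking", -1),
}

def update_preferences(prefs: dict, feedback: str) -> dict:
    """Adjust a preference score when the feedback matches a known phrase."""
    rule = FEEDBACK_RULES.get(feedback.strip().lower())
    if rule:
        dim, delta = rule
        # Clamp each score to a small range so one-off feedback can't dominate.
        prefs[dim] = max(-2, min(2, prefs.get(dim, 0) + delta))
    return prefs

def inject_context(prefs: dict) -> str:
    """Render nonzero preferences as a context block for the agent's prompt."""
    lines = [f"- {dim}: {score:+d}"
             for dim, score in sorted(prefs.items()) if score]
    return "Learned user preferences:\n" + "\n".join(lines) if lines else ""

prefs = update_preferences({}, "TLDR")
print(inject_context(prefs))
```

Saying "TLDR" lowers the verbosity score, and the rendered block ("Learned user preferences: - verbosity: -1") is prepended to the agent's context on subsequent turns, which is the "context injection" the feature list refers to.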
Quick Start
Use the nox-preference-learner skill to make the agent's responses less verbose.
Dependency Matrix
Required Modules
None required
Components
scripts
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill: Name: nox-preference-learner Download link: https://github.com/rockywuest/openclaw-memory-local/archive/main.zip#nox-preference-learner Please download this .zip file, extract it, and install it in the .claude/skills/ directory.