validate-evaluator

Community

Calibrate LLM judges against human labels.

Author: hamelsmu
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill ensures that your LLM-based judges evaluate outputs accurately by calibrating them against human judgment, preventing biased or unreliable assessments.

Core Features & Use Cases

  • LLM Judge Calibration: Align LLM judge verdicts with human-defined Pass/Fail criteria.
  • Performance Measurement: Quantify judge accuracy using True Positive Rate (TPR) and True Negative Rate (TNR); a minimal sketch follows this list.
  • Bias Correction: Apply statistical methods to estimate the judge's true success rate on production data; see the sketch after the Quick Start.
  • Use Case: After drafting a judge prompt to evaluate customer support responses, use this skill to test it against human-labeled examples, confirming it correctly identifies both good and bad responses before you deploy it.
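
To make the TPR/TNR measurement concrete, here is a minimal sketch (not the skill's own script) of scoring a judge against human labels with scikit-learn. The label encoding (1 = Pass, 0 = Fail) and all variable names are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Human-labeled test set: 1 = Pass, 0 = Fail (encoding is an assumption).
human_labels = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
# The judge's verdicts on the same examples.
judge_preds = np.array([1, 1, 0, 0, 0, 1, 1, 1, 0, 1])

# Unpack counts: true negatives, false positives, false negatives, true positives.
tn, fp, fn, tp = confusion_matrix(human_labels, judge_preds, labels=[0, 1]).ravel()

tpr = tp / (tp + fn)  # how often the judge passes genuinely good responses
tnr = tn / (tn + fp)  # how often the judge fails genuinely bad responses

print(f"TPR: {tpr:.2f}, TNR: {tnr:.2f}")
```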

Quick Start

Use the validate-evaluator skill to calibrate the LLM judge against the provided human-labeled dataset.
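
Once TPR and TNR have been measured, the bias-correction step estimates the judge's true success rate on unlabeled production data. Below is a minimal sketch of one standard correction (the Rogan-Gladen estimator); the listed judgy dependency presumably packages this kind of estimate, so treat the helper name and example numbers as illustrative assumptions rather than the skill's actual implementation.

```python
import numpy as np

def corrected_success_rate(observed_pass_rate: float, tpr: float, tnr: float) -> float:
    """Rogan-Gladen correction: recover the true pass rate from the judge's
    observed pass rate, given the judge's measured TPR and TNR."""
    theta = (observed_pass_rate + tnr - 1.0) / (tpr + tnr - 1.0)
    return float(np.clip(theta, 0.0, 1.0))  # keep the estimate inside [0, 1]

# Example: the judge passes 70% of production responses; TPR = 0.83, TNR = 0.75.
print(corrected_success_rate(0.70, 0.83, 0.75))  # ~0.78
```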

Dependency Matrix

Required Modules

sklearn, numpy, judgy

Components

scripts, references

💻 Claude Code Installation

Recommended: Let Claude install it automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: validate-evaluator
Download link: https://github.com/hamelsmu/evals-skills/archive/main.zip#validate-evaluator

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
