pairwise-compare-evals

Official

Rank evals by pairwise criteria with justification.

Author: EquiStamp
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

Pairwise-compare-evals provides a structured method to rank AI safety evaluations by systematically comparing every pair of evals across a fixed set of criteria using the Saaty scale, producing a transparent justification trail for each judgment.
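The Saaty scale comes from the Analytic Hierarchy Process: a 1-9 intensity scale expressing how strongly one item is preferred over another, with reciprocals used when the second item is preferred. As a rough illustration (not the skill's actual schema), a single pairwise judgment might look like this:

```python
from dataclasses import dataclass

# Saaty intensity scale: 1 = equal, 3 = moderate, 5 = strong,
# 7 = very strong, 9 = extreme preference; 2, 4, 6, 8 are intermediate values.

@dataclass
class PairwiseJudgment:
    """One head-to-head comparison of two evals on one criterion (illustrative)."""
    eval_a: str
    eval_b: str
    criterion: str
    score: float       # Saaty intensity of eval_a over eval_b; reciprocal if eval_b is preferred
    justification: str

judgment = PairwiseJudgment(
    eval_a="eval-alpha",
    eval_b="eval-beta",
    criterion="threat-model relevance",
    score=5,
    justification="eval-alpha targets a concrete deployment scenario; eval-beta stays abstract.",
)
```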

Core Features & Use Cases

  • Compares all assessed evals head-to-head on 10 criteria (7 rubric dimensions + 3 porting criteria) to generate per-pair scores and overall rankings.
  • Generates batch prompts and aggregates results into a matrix and summary rankings, enabling data-driven prioritization of evals (aggregation sketched after this list).
  • Use Case: run the full evaluation comparison workflow to surface top-performing evals and identify gaps in coverage or feasibility.
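The page does not document the exact aggregation method, so the sketch below uses the standard geometric-mean (AHP) approximation as one plausible way to turn a reciprocal comparison matrix of Saaty scores into summary rankings:

```python
from math import prod

def rank_from_comparisons(evals, matrix):
    """Derive priority weights from a reciprocal pairwise comparison matrix.

    matrix[i][j] is the Saaty-scale preference of evals[i] over evals[j],
    with matrix[j][i] as its reciprocal. The geometric-mean method is a
    common approximation of the AHP principal-eigenvector weights.
    """
    n = len(evals)
    geo_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    weights = [g / total for g in geo_means]
    return sorted(zip(evals, weights), key=lambda kv: kv[1], reverse=True)

# Illustrative 3-eval example; the numbers are made up.
evals = ["eval-alpha", "eval-beta", "eval-gamma"]
matrix = [
    [1,     5,     3],
    [1 / 5, 1,     1 / 3],
    [1 / 3, 3,     1],
]
for name, weight in rank_from_comparisons(evals, matrix):
    print(f"{name}: {weight:.3f}")
```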

Quick Start

Run the full pairwise evaluation workflow to generate batches and start processing.
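This page does not show the skill's actual scripts or entry points, so the following is only a hypothetical sketch of what "generate batches" involves conceptually: enumerate every pair of evals against every criterion and group the resulting comparison prompts into batches for processing.

```python
from itertools import combinations

# Illustrative subset of criteria; the skill defines 10 (7 rubric + 3 porting).
CRITERIA = ["threat-model relevance", "scoring validity", "porting feasibility"]

def generate_batch_prompts(evals, criteria=CRITERIA, batch_size=20):
    """Yield batches of pairwise-comparison prompts (hypothetical format)."""
    prompts = [
        f"Compare '{a}' vs '{b}' on '{criterion}'. Give a Saaty-scale score "
        f"(1-9, reciprocal if '{b}' is preferred) and a one-paragraph justification."
        for a, b in combinations(evals, 2)
        for criterion in criteria
    ]
    for start in range(0, len(prompts), batch_size):
        yield prompts[start:start + batch_size]

for i, batch in enumerate(generate_batch_prompts(["eval-alpha", "eval-beta", "eval-gamma"])):
    print(f"batch {i}: {len(batch)} prompts")
```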

Dependency Matrix

Required Modules

PyYAML

Components

scripts
assets
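PyYAML is the only listed dependency, which suggests the criteria and eval metadata ship as YAML files in the skill's assets. The file name and schema below are assumptions for illustration only.

```python
import yaml  # PyYAML

# Hypothetical asset path and schema; the real files may differ.
with open("assets/criteria.yaml") as fh:
    config = yaml.safe_load(fh)

for criterion in config.get("criteria", []):
    print(criterion["name"], "-", criterion.get("description", ""))
```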

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: pairwise-compare-evals
Download link: https://github.com/EquiStamp/evaluating-evaluations/archive/main.zip#pairwise-compare-evals

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
View Source Repository
