evaluation-metrics

Community

Rigorous, reproducible LLM evaluation.

Author: ricardoroche
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

Evaluating LLM performance is often ad hoc and hard to reproduce. This skill provides patterns for rigorous, reproducible evaluation, built on well-structured datasets and objective metrics.

Core Features & Use Cases

  • Evaluation Dataset: Define datasets with examples and metadata.
  • Evaluation Metrics: Implement exact-match and token-overlap metrics, plus tooling to aggregate results (see the sketch after this list).
  • Experiment Tracking: Plan and track A/B tests and model comparisons.
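
To make the two metrics concrete, here is a minimal Python sketch of exact-match and token-overlap scoring with simple aggregation. The function names exact_match, token_overlap, and aggregate are illustrative assumptions, not the skill's actual API.

  from collections import Counter

  def exact_match(prediction: str, reference: str) -> float:
      # 1.0 if the normalized prediction equals the reference, else 0.0.
      return float(prediction.strip().lower() == reference.strip().lower())

  def token_overlap(prediction: str, reference: str) -> float:
      # Token-level F1 over whitespace tokens, one common formulation.
      pred_tokens = prediction.lower().split()
      ref_tokens = reference.lower().split()
      common = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
      if common == 0:
          return 0.0
      precision = common / len(pred_tokens)
      recall = common / len(ref_tokens)
      return 2 * precision * recall / (precision + recall)

  def aggregate(scores: list[float]) -> float:
      # Average per-example scores into a single dataset-level number.
      return sum(scores) / len(scores) if scores else 0.0

Token overlap is shown here as a token-level F1, which is a common choice; the skill may define the metric differently.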

Quick Start

Create an evaluation dataset named 'summarization_eval' and save it as 'eval_data/summarization_v1.json'. Then compute ExactMatch and TokenOverlap metrics on your predictions.
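
As a rough sketch of what that workflow could look like, assuming a hypothetical JSON layout and reusing the helper functions from the sketch above (the field names and schema are guesses; the skill's real dataset format may differ):

  # Hypothetical layout of eval_data/summarization_v1.json:
  # {
  #   "name": "summarization_eval",
  #   "metadata": {"task": "summarization", "version": "v1"},
  #   "examples": [
  #     {"id": "ex-001", "input": "<article text>", "reference": "<gold summary>"}
  #   ]
  # }

  import json

  with open("eval_data/summarization_v1.json") as f:
      dataset = json.load(f)

  # Model outputs keyed by example id (placeholder values).
  predictions = {"ex-001": "<model summary>"}

  em_scores, overlap_scores = [], []
  for example in dataset["examples"]:
      pred = predictions[example["id"]]
      em_scores.append(exact_match(pred, example["reference"]))
      overlap_scores.append(token_overlap(pred, example["reference"]))

  print("ExactMatch:", aggregate(em_scores))
  print("TokenOverlap:", aggregate(overlap_scores))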

Dependency Matrix

Required Modules

None required

Components

Standard package

💻 Claude Code Installation

Recommended: Let Claude install it automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: evaluation-metrics
Download link: https://github.com/ricardoroche/ricardos-claude-code/archive/main.zip#evaluation-metrics

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.