rag-eval

Community

Quantify RAG quality with metrics and benchmarks.

Author: floflo777
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill enables rigorous evaluation of RAG systems by measuring retrieval, generation, and latency metrics to ensure quality and reliability.

Core Features & Use Cases

  • Local evaluation: Run tests against your own dataset, with no external services, to compute retrieval metrics (recall, precision, MRR, NDCG) and generation metrics (faithfulness, relevance, coherence, conciseness).
  • Ailog benchmarking (optional): Compare your system against Ailog's production RAG API to obtain a comparative baseline.
  • Latency & end-to-end profiling: Measure performance from retrieval through generation to identify bottlenecks.
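The retrieval metrics named above can be sketched in plain Python. This is an illustrative sketch, not the Skill's actual implementation; the function names and document IDs are assumptions, and NDCG is shown with binary relevance labels:

```python
import math

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved IDs that are relevant."""
    return sum(1 for d in retrieved[:k] if d in relevant) / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant IDs found in the top-k retrieved."""
    return sum(1 for d in retrieved[:k] if d in relevant) / len(relevant)

def mrr(retrieved, relevant):
    """Reciprocal rank of the first relevant hit (0 if none)."""
    for rank, d in enumerate(retrieved, start=1):
        if d in relevant:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(retrieved, relevant, k):
    """Binary-relevance NDCG: DCG of the ranking over the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(retrieved[:k]) if d in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0

# Hypothetical example: ranked retriever output vs. gold labels for one query.
retrieved = ["doc3", "doc1", "doc7"]
relevant = {"doc1", "doc4"}

print(precision_at_k(retrieved, relevant, 3))  # 1 relevant hit in top 3 -> 1/3
print(recall_at_k(retrieved, relevant, 3))     # 1 of 2 relevant found -> 0.5
print(mrr(retrieved, relevant))                # first hit at rank 2 -> 0.5
```

In a real evaluation these per-query scores would be averaged over the whole test dataset.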

Quick Start

Run a local evaluation with a prepared test dataset, then review the metrics in the generated report. If you have an Ailog API key, enable the benchmark to compare results against Ailog.
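The latency profiling step can be sketched as a simple per-stage timing loop. The `retrieve` and `generate` callables below are stand-ins for your own pipeline components, not the Skill's built-in profiler:

```python
import time

def profile_pipeline(queries, retrieve, generate):
    """Time retrieval and generation separately for each query."""
    report = []
    for q in queries:
        t0 = time.perf_counter()
        docs = retrieve(q)
        t1 = time.perf_counter()
        answer = generate(q, docs)
        t2 = time.perf_counter()
        report.append({
            "query": q,
            "retrieval_ms": (t1 - t0) * 1000,
            "generation_ms": (t2 - t1) * 1000,
            "total_ms": (t2 - t0) * 1000,
            "answer": answer,
        })
    return report

# Stub components so the sketch runs end to end.
demo = profile_pipeline(
    ["what is rag?"],
    retrieve=lambda q: ["doc1"],
    generate=lambda q, docs: "stub answer",
)
print(demo[0]["total_ms"] >= demo[0]["retrieval_ms"])  # True
```

Per-stage timings like these make it easy to see whether retrieval or generation dominates end-to-end latency.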

Dependency Matrix

Required Modules

None required

Components

Standard package

💻 Claude Code Installation

Recommended: Let Claude install it automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: rag-eval
Download link: https://github.com/floflo777/claude-rag-skills/archive/main.zip#rag-eval

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
