promptfoo-evaluation

Community

Benchmark prompts with structured tests.

Author: aleister1102
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

Prompt engineering and model benchmarking are often inconsistent and hard to reproduce across teams. This skill provides a structured way to configure, run, and compare LLM evaluations with Promptfoo, so results are reproducible.

Core Features & Use Cases

  • Structured evaluation pipelines: configure prompts, tests, and providers in a single project (see the example configuration below).
  • Automated grading with Python assertions and LLM rubrics for objective comparisons (a grader sketch follows the configuration below).
  • Model-to-model and provider comparisons across multiple scenarios for QA, research, or product validation.
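
For illustration, a minimal promptfooconfig.yaml could tie these pieces together. The provider IDs, prompt text, and file paths below are placeholders rather than part of this skill:

    # promptfooconfig.yaml — hypothetical example; swap in your own providers, prompts, and paths
    description: Summarization prompt benchmark

    prompts:
      - "Summarize the following text in two sentences: {{text}}"

    providers:
      - openai:gpt-4o-mini                              # placeholder provider IDs
      - anthropic:messages:claude-3-5-sonnet-20241022

    tests:
      - vars:
          text: "Promptfoo runs the same test cases against every configured provider."
        assert:
          - type: contains
            value: Promptfoo
          - type: python
            value: file://assertions/length_check.py    # custom Python grader, sketched below
          - type: llm-rubric
            value: The summary is factually consistent with the source text.

For the Python assertion referenced above, Promptfoo looks for a get_assert function in the referenced file; the grading logic here is a made-up sketch, not code shipped with this skill:

    # assertions/length_check.py — hypothetical custom grader for the `python` assertion above.
    # Promptfoo calls get_assert(output, context); returning a dict attaches a score and reason.

    def get_assert(output: str, context) -> dict:
        """Pass if the model's answer stays within a two-sentence budget."""
        sentence_count = sum(output.count(p) for p in (".", "!", "?"))
        passed = 0 < sentence_count <= 2
        return {
            "pass": passed,
            "score": 1.0 if passed else 0.0,
            "reason": f"Counted roughly {sentence_count} sentence(s); expected at most 2.",
        }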

Quick Start

Initialize a Promptfoo project, load prompts and tests, and run the evaluation to produce results.
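
In practice the workflow usually looks like the following, assuming Node.js is installed; the commands come from the Promptfoo CLI and may differ slightly between versions:

    npx promptfoo@latest init    # scaffold promptfooconfig.yaml plus example prompts and tests
    npx promptfoo@latest eval    # run every test case against each configured provider
    npx promptfoo@latest view    # open the local results viewer to compare outputs side by side

Running eval produces graded results for every prompt/provider/test combination, which is what makes later comparisons reproducible.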

Dependency Matrix

Required Modules

None required

Components

  • references

💻 Claude Code Installation

Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: promptfoo-evaluation
Download link: https://github.com/aleister1102/skills/archive/main.zip#promptfoo-evaluation

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
