ai-evals
Design AI evals for confident shipping.
Category: Community / Product & Management
Tags: quality assurance, error analysis, ai evaluation, llm testing, rubric design, test set generation
Author: oldwinter
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill provides a structured framework for designing and executing evaluations of AI/LLM features, so you can verify quality, safety, and performance before deployment.
Core Features & Use Cases
- Eval PRD Creation: Define clear evaluation requirements, scope, and acceptance thresholds.
- Test Set & Taxonomy Development: Build golden test sets and error taxonomies from failure analysis.
- Rubric & Judge Planning: Design scoring rubrics and select appropriate judging approaches (human, LLM-as-judge).
- Use Case: You've developed a new AI assistant for customer support. Use this Skill to create a comprehensive evaluation plan, including test cases, a scoring rubric, and a process for analyzing results to ensure it meets quality and safety standards before launch.
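The test-set and rubric ideas above can be sketched in a few lines of code. The sketch below is illustrative only: the `EvalCase` structure, the keyword-based rubric, and the 0.8 acceptance threshold are assumptions for the example, not part of the ai-evals skill itself.

```python
# Minimal sketch of running a golden test set against a scoring rubric.
# EvalCase, score_response, and the threshold are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class EvalCase:
    prompt: str                              # input sent to the AI feature
    expected_keywords: list = field(default_factory=list)  # facts the answer must mention


def score_response(case: EvalCase, response: str) -> float:
    """Toy rubric: fraction of expected keywords present in the response."""
    hits = sum(1 for kw in case.expected_keywords if kw.lower() in response.lower())
    return hits / len(case.expected_keywords)


def run_eval(cases, model_fn, threshold=0.8):
    """Return the pass rate: share of cases scoring at or above the threshold."""
    passed = sum(
        1 for case in cases
        if score_response(case, model_fn(case.prompt)) >= threshold
    )
    return passed / len(cases)


# Toy usage with a stub standing in for the AI feature under test.
cases = [EvalCase("How do I reset my password?", ["reset", "password"])]
pass_rate = run_eval(cases, lambda prompt: "Click 'reset password' in settings.")
print(pass_rate)  # 1.0
```

In practice the keyword rubric would be replaced by a richer scorer (human graders or an LLM-as-judge), but the loop structure (cases in, per-case scores, pass rate against an acceptance threshold) stays the same.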
Quick Start
Use the ai-evals skill to design an evaluation plan for a new AI feature.
Dependency Matrix
Required Modules: None required
Components: references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: ai-evals
Download link: https://github.com/oldwinter/skills/archive/main.zip#ai-evals
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.