evaluation
Community · Measure and improve agent performance.
Category: Software Engineering
Tags: quality assurance, llm-as-judge, evaluation, performance metrics, rubrics, agent testing
Author: 466852675
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill provides a framework for systematically evaluating the performance and quality of AI agents, enabling continuous improvement and validation of context engineering choices.
Core Features & Use Cases
- Multi-dimensional Rubrics: Define and apply rubrics covering factual accuracy, completeness, citation accuracy, source quality, and tool efficiency.
- LLM-as-Judge & Human Evaluation: Supports both automated (LLM-as-judge) and manual evaluation methodologies; a sketch of how a rubric and judge pass fit together follows this list.
- Test Set Management: Tools for creating, filtering, and analyzing test sets stratified by complexity.
- Production Monitoring: Features to sample and track agent performance in live environments.
- Use Case: A team developing a customer support agent can use this skill to create a test suite of common queries, evaluate the agent's responses against a defined rubric, and monitor its pass rate in production to catch regressions.
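The sketch below shows one way these pieces could fit together: a multi-dimensional rubric scored by an LLM-as-judge over a complexity-stratified test case. The data structures, the 1-5 scale, and the keyword-matching judge stub (standing in for a real grader-model call) are illustrative assumptions, not this skill's actual interface.

```python
# Illustrative sketch only: the dimension names come from the feature list
# above; the data structures and scoring scale are assumptions, not this
# skill's API.
from dataclasses import dataclass, field

RUBRIC_DIMENSIONS = [
    "factual accuracy",
    "completeness",
    "citation accuracy",
    "source quality",
    "tool efficiency",
]

@dataclass
class TestCase:
    query: str
    complexity: str  # e.g. "simple" or "multi-step", for stratified test sets
    expected_points: list[str] = field(default_factory=list)

def judge(dimension: str, case: TestCase, response: str) -> int:
    """Stand-in for an LLM-as-judge call. A real grader would prompt a model
    to score `dimension` on a 1-5 scale; this stub scores 5 when every
    expected point appears in the response, else 2, so the demo runs offline."""
    hit = all(p.lower() in response.lower() for p in case.expected_points)
    return 5 if hit else 2

def evaluate(case: TestCase, response: str, pass_threshold: float = 4.0) -> bool:
    """Average the per-dimension scores and compare against a pass threshold."""
    scores = {d: judge(d, case, response) for d in RUBRIC_DIMENSIONS}
    return sum(scores.values()) / len(scores) >= pass_threshold

# Example: one simple support query that passes the rubric.
case = TestCase(query="How do I reset my password?",
                complexity="simple",
                expected_points=["reset link", "email"])
print(evaluate(case, "Click the reset link we email you."))  # True
```

For production monitoring, the same evaluate() loop could run over a sample of live traffic instead of a fixed test set, tracking the pass rate over time to catch regressions.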
Quick Start
Use the evaluation skill to run a comprehensive performance test suite against the agent.
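For example, a prompt you might paste (the test-set path and per-dimension reporting here are illustrative, not a fixed command syntax):

Use the evaluation skill to run the test set in tests/support_queries.json against my support agent, score each response against the rubric, and report the pass rate per dimension.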
Dependency Matrix
Required Modules
None required
Components
scripts, references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: evaluation
Download link: https://github.com/466852675/TISHICIKU-2025/archive/main.zip#evaluation
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.