af-skill-write-agent-benchmarks
Community
Benchmark AI agents with evidence-based tests.
Author: korchasa
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Benchmark AI agents under controlled, reproducible conditions to verify performance.
Core Features & Use Cases
- End-to-end benchmarking with a standard evidence-based protocol.
- Deterministic evaluation in isolated sandboxes, with a judge and a full execution trace backing each verdict.
- Applicable to coding, data analysis, and conversational agents in production-like scenarios.
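To make "evidence-based" concrete, the sketch below shows one plausible shape for a verdict that bundles a judge's decision with the trace steps that back it. All names (TraceStep, Verdict) are illustrative assumptions, not the skill's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    action: str        # what the agent did in the sandbox
    observation: str   # what the sandbox returned

@dataclass
class Verdict:
    passed: bool
    score: float                                  # judge score, e.g. in [0, 1]
    evidence: list = field(default_factory=list)  # trace steps supporting the verdict

# A verdict is only as strong as the trace attached to it.
trace = [TraceStep("run test suite", "3 passed, 0 failed")]
verdict = Verdict(passed=True, score=1.0, evidence=trace)
```

Keeping the evidence list on the verdict itself makes every pass/fail decision auditable after the run.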
Quick Start
Define a benchmarking goal, design an isolated sandbox, create a scenario, and run it through the Runner to obtain a structured verdict with supporting evidence.
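The goal → sandbox → scenario → runner flow above can be sketched roughly as follows. This is a minimal toy under assumed names (run_in_sandbox, the scenario dict layout); the skill's real Runner interface may look different.

```python
def run_in_sandbox(scenario, agent):
    """Execute the agent against one scenario in isolation and return a structured verdict."""
    answer = agent(scenario["task"])          # agent acts only on the scenario's inputs
    passed = scenario["check"](answer)        # deterministic check stands in for the judge
    return {"scenario": scenario["name"], "passed": passed, "answer": answer}

# A scenario pairs a task with a reproducible success check.
scenario = {
    "name": "sum-two-numbers",
    "task": (2, 3),
    "check": lambda out: out == 5,
}

def toy_agent(task):
    # Stand-in for a real agent under test.
    return task[0] + task[1]

verdict = run_in_sandbox(scenario, toy_agent)
```

Because the check is a pure function of the agent's output, re-running the scenario yields the same verdict, which is what makes the benchmark reproducible.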
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: af-skill-write-agent-benchmarks
Download link: https://github.com/korchasa/ide-rules/archive/main.zip#af-skill-write-agent-benchmarks
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
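For a manual install, the steps Claude is asked to perform might look like the shell sketch below. The extracted directory name (ide-rules-main) is an assumption based on how GitHub names branch archives; verify it after unzipping.

```shell
SKILL=af-skill-write-agent-benchmarks
TARGET="$HOME/.claude/skills/$SKILL"

# Download and extract the archive (directory name assumed, not confirmed)
curl -L -o /tmp/main.zip "https://github.com/korchasa/ide-rules/archive/main.zip"
unzip -q -o /tmp/main.zip -d /tmp

# Copy the skill's folder into the Claude skills directory
mkdir -p "$TARGET"
cp -r "/tmp/ide-rules-main/$SKILL/." "$TARGET/"
```

If the archive's top-level folder differs, adjust the cp source path accordingly.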