anthropic-evaluations

Community

Design and run robust AI agent evaluations.

Author: dwmkerr
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This skill provides a framework for designing and evaluating AI agent tasks using Anthropic-style eval patterns. It helps teams define standardized graders, templates, and evaluation harnesses to ensure robust QA across coding, conversational, and research agents.

Core Features & Use Cases

  • Grader types: code-based, model-based, and human graders (a minimal sketch follows this list).
  • Templates and references: YAML templates for coding and conversational evals, plus reference materials.
  • Use cases: building evaluation suites, benchmarking agent capabilities, and QA for research agents.
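As a rough illustration of the three grader types, here is a minimal Python sketch. The function names, the judge-model callable, and the review-queue stub are all hypothetical; they are not part of the skill, only an indication of how each grader type typically plugs into a harness.

```python
# Hypothetical sketch of the grader types listed above; names and
# shapes are illustrative, not defined by this skill.
from typing import Callable

# Code-based grader: a deterministic check against an expected value.
def exact_match_grader(expected: str) -> Callable[[str], bool]:
    def grade(output: str) -> bool:
        return output.strip() == expected.strip()
    return grade

# Model-based grader: ask a judge model to score the output against a rubric.
# `call_judge_model` is a placeholder for whatever LLM client you use.
def rubric_grader(rubric: str,
                  call_judge_model: Callable[[str], str]) -> Callable[[str], bool]:
    def grade(output: str) -> bool:
        verdict = call_judge_model(
            f"Rubric:\n{rubric}\n\nResponse:\n{output}\n\nAnswer PASS or FAIL."
        )
        return "PASS" in verdict.upper()
    return grade

# Human grader: in practice a review queue; here just a stub that records
# the item for later manual review.
def human_grader(queue: list) -> Callable[[str], None]:
    def grade(output: str) -> None:
        queue.append(output)
    return grade
```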

Quick Start

Review the references/ directory for evaluation patterns and starter templates such as coding-agent-eval.yaml or conversational-agent-eval.yaml. Adapt a template to your task, configure your graders and tracked metrics, and run your evaluation harness to collect transcripts, tool usage, and pass/fail signals.
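A minimal harness along those lines might look like the sketch below. The YAML field names (tasks, prompt, expected), the run_agent() placeholder, and the use of a simple exact-match check are assumptions for illustration; adapt them to the template you copied from references/ and to your own agent runner.

```python
# Minimal evaluation-harness sketch. Field names and run_agent() are
# hypothetical placeholders; adjust to your chosen template and agent.
import yaml  # third-party: pip install pyyaml


def run_agent(prompt: str) -> dict:
    """Placeholder for your agent runner; should return the transcript,
    tool calls, and final answer for one task."""
    return {"transcript": [], "tool_calls": [], "answer": ""}


def run_eval(template_path: str) -> list[dict]:
    with open(template_path) as f:
        config = yaml.safe_load(f)

    results = []
    for task in config.get("tasks", []):
        outcome = run_agent(task["prompt"])
        # Simple code-based pass/fail signal; swap in any grader you configured.
        passed = outcome["answer"].strip() == task.get("expected", "").strip()
        results.append({
            "task": task.get("id"),
            "passed": passed,
            "transcript": outcome["transcript"],
            "tool_calls": outcome["tool_calls"],
        })
    return results


if __name__ == "__main__":
    for result in run_eval("references/coding-agent-eval.yaml"):
        print(result["task"], "PASS" if result["passed"] else "FAIL")
```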

Dependency Matrix

Required Modules

None required

Components

references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: anthropic-evaluations
Download link: https://github.com/dwmkerr/claude-toolkit/archive/main.zip#anthropic-evaluations

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
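If you prefer to install manually, something like the following Python sketch performs the same steps: download the archive, extract it, and copy the skill folder into .claude/skills/. The location of the skill directory inside the extracted archive is an assumption; inspect the contents and adjust as needed.

```python
# Manual-install sketch. The path of the skill inside the archive is an
# assumption; check the extracted contents before relying on this.
import io
import shutil
import urllib.request
import zipfile
from pathlib import Path

ARCHIVE_URL = "https://github.com/dwmkerr/claude-toolkit/archive/main.zip"
SKILL_NAME = "anthropic-evaluations"
DEST = Path(".claude/skills") / SKILL_NAME

with urllib.request.urlopen(ARCHIVE_URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

tmp = Path("/tmp/claude-toolkit")
archive.extractall(tmp)

# Find the skill directory inside the extracted archive (location assumed);
# raises StopIteration if no folder with that name exists.
skill_dir = next(tmp.rglob(SKILL_NAME))
DEST.parent.mkdir(parents=True, exist_ok=True)
shutil.copytree(skill_dir, DEST, dirs_exist_ok=True)
print(f"Installed {SKILL_NAME} to {DEST}")
```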
Source repository: https://github.com/dwmkerr/claude-toolkit
