web-search-agent-evals
Official
Automate and compare web-search agent benchmarks
Author: youdotcom-oss
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
The Web Search Agent Evaluations skill coordinates automated testing of multiple CLI web-search agents across isolated Docker containers, enabling reproducible benchmarking and side-by-side comparison of results.
Core Features & Use Cases
- Headless adapters and a harness-driven evaluation pipeline for 4 agents (Claude Code, Gemini, Droid, Codex) crossed with 2 search providers (builtin and MCP), yielding 8 experiment pairings (see the sketch after this list).
- Type-safe configuration constants, MCP server definitions, and a Bun-based entrypoint to orchestrate runs in containers.
- Rich results, summaries, and prompt-management tooling to support iterative analysis, comparisons, and publication-quality reports.
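The agent-by-provider matrix can be expressed as a small set of typed constants. The sketch below is illustrative only; the names (AGENTS, SEARCH_PROVIDERS, ExperimentPairing) are assumptions for this example, not the skill's actual configuration constants.

```ts
// Illustrative sketch: 4 agents x 2 search providers = 8 experiment pairings.
// All identifiers here are hypothetical, not the skill's real config.
const AGENTS = ["claude-code", "gemini", "droid", "codex"] as const;
const SEARCH_PROVIDERS = ["builtin", "mcp"] as const;

type Agent = (typeof AGENTS)[number];
type SearchProvider = (typeof SEARCH_PROVIDERS)[number];

interface ExperimentPairing {
  agent: Agent;
  provider: SearchProvider;
}

// Cartesian product of agents and providers.
const PAIRINGS: ExperimentPairing[] = AGENTS.flatMap((agent) =>
  SEARCH_PROVIDERS.map((provider) => ({ agent, provider })),
);

console.log(`${PAIRINGS.length} experiment pairings`); // 8
```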
Quick Start
Run the Bun entrypoint with bun run to start the evaluation workflow across all agent/provider combinations.
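For illustration, a minimal orchestration loop over the pairings might look like the sketch below. This is an assumption about how container runs could be driven from Bun; the Docker image name, environment variables, and control flow are placeholders, not the skill's actual entrypoint.

```ts
// Hypothetical orchestration sketch, NOT the skill's real entrypoint.
// Each agent/provider pairing runs in its own throwaway container for isolation.
const agents = ["claude-code", "gemini", "droid", "codex"];
const providers = ["builtin", "mcp"];

for (const agent of agents) {
  for (const provider of providers) {
    const proc = Bun.spawn(
      [
        "docker", "run", "--rm",
        "-e", `AGENT=${agent}`,
        "-e", `SEARCH_PROVIDER=${provider}`,
        "web-search-agent-evals:latest", // placeholder image name
      ],
      { stdout: "inherit", stderr: "inherit" },
    );
    console.log(`${agent}/${provider} exited with code ${await proc.exited}`);
  }
}
```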
Dependency Matrix
Required Modules: None required
Components: scripts, references, assets
💻 Claude Code Installation
Recommended: Let Claude install automatically. Copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: web-search-agent-evals
Download link: https://github.com/youdotcom-oss/web-search-agent-evals/archive/main.zip#web-search-agent-evals
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.