evaluate-attempt
Official
Validate benchmark attempts with precise metrics.
Author: brazil-bench
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill evaluates a completed brazil-bench attempt against the spec.md requirements, capturing metrics for comparison across orchestration patterns. It ensures consistent evaluation across Python and Swift/iOS projects.
Core Features & Use Cases
- Conformance Metrics: automatically verify alignment with the benchmark spec and produce a structured report (a plausible report shape is sketched after this list).
- Multi-language Support: supports Python and Swift/iOS implementations, adapting checks to the language and tooling.
- Use Case: after an attempt is completed, run this Skill to generate a leaderboard-ready report detailing conformance, test outcomes, and development timeline.
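A plausible shape for that structured report, inferred only from the feature list above. Every field name here is an illustrative assumption, not the skill's documented output format:

```python
# Illustrative only: a plausible schema for the leaderboard-ready report.
# All field names are assumptions inferred from the feature list above.
from dataclasses import dataclass, field

@dataclass
class AttemptReport:
    attempt_repo: str                  # e.g. "attempt-3"
    language: str                      # "python" or "swift"
    spec_conformance: float            # fraction of spec.md requirements met
    tests_passed: int                  # test outcomes, per the use case above
    tests_failed: int
    dev_timeline_hours: float          # development timeline, wall-clock
    notes: list[str] = field(default_factory=list)
```

Keeping the report a flat record like this makes it trivial to aggregate rows into a leaderboard across attempts and orchestration patterns.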
Quick Start
Run the evaluate-attempt skill against a repository and specify an output directory, for example:
evaluate-attempt --attempt_repo=attempt-3 --output_dir=./results
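For scripted runs, here is a minimal Python sketch that shells out to the command above and loads its report. It assumes the CLI is on PATH and that a report.json lands in the output directory; that filename is an assumption, not documented behavior:

```python
# Hypothetical wrapper around the CLI shown above.
import json
import subprocess
from pathlib import Path

def evaluate(attempt_repo: str, output_dir: str) -> dict:
    # Invoke the skill with the flags from the Quick Start example.
    subprocess.run(
        ["evaluate-attempt",
         f"--attempt_repo={attempt_repo}",
         f"--output_dir={output_dir}"],
        check=True,
    )
    # Assumed output filename; adjust to whatever the skill actually writes.
    report_path = Path(output_dir) / "report.json"
    return json.loads(report_path.read_text())

if __name__ == "__main__":
    print(evaluate("attempt-3", "./results"))
```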
Dependency Matrix
Required Modules
None required
Components
Standard package
💻 Claude Code Installation
Recommended: let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: evaluate-attempt
Download link: https://github.com/brazil-bench/pourpoise/archive/main.zip#evaluate-attempt
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
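If you prefer to install by hand, here is a rough Python sketch of the same steps. The pourpoise-main/evaluate-attempt path inside the archive is an assumption about the repository layout:

```python
# Manual-install sketch following the instructions above: download the zip,
# extract it, and copy the skill folder into .claude/skills/.
import io
import shutil
import urllib.request
import zipfile
from pathlib import Path

URL = "https://github.com/brazil-bench/pourpoise/archive/main.zip"

with urllib.request.urlopen(URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

tmp = Path("._skill_tmp")
archive.extractall(tmp)

# GitHub archives unpack to <repo>-<branch>/; the skill subfolder is assumed.
src = tmp / "pourpoise-main" / "evaluate-attempt"
dest = Path(".claude/skills/evaluate-attempt")
dest.parent.mkdir(parents=True, exist_ok=True)
shutil.copytree(src, dest, dirs_exist_ok=True)
shutil.rmtree(tmp)
print(f"Installed to {dest}")
```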