eval-runner
Community
Benchmark AI code generation quality.
Author: kaelig
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill automates the rigorous evaluation of AI-generated code against design specifications, ensuring quality, correctness, and adherence to standards.
Core Features & Use Cases
- Automated Evaluation: Runs generated code through a suite of deterministic and LLM-based graders.
- Benchmarking: Produces detailed reports on compilation, linting, semantic correctness, accessibility, and more.
- Use Case: After an AI model generates a React component, use this Skill to automatically test its compilation, check for linting errors, verify it uses design tokens correctly, and assess its accessibility compliance, providing a comprehensive quality score.
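As an illustration only (the Skill's actual report schema is not documented here), a per-fixture result from such a grader suite might be shaped roughly like this hypothetical TypeScript sketch:

```typescript
// Hypothetical shape of an eval-runner report.
// Field and grader names are illustrative, not the Skill's actual output format.
interface GraderResult {
  grader: "compile" | "lint" | "design-tokens" | "accessibility" | "llm-semantic";
  passed: boolean;
  score: number;        // 0–1, where 1 means the check fully passed
  details?: string;     // e.g. compiler or linter output, or an LLM grader's notes
}

interface FixtureReport {
  fixture: string;         // path to the evaluated fixture
  results: GraderResult[]; // one entry per deterministic or LLM-based grader
  overallScore: number;    // aggregate quality score across all graders
}
```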
Quick Start
Run the eval-runner skill to execute the react-craft eval suite against the fixture located at /path/to/fixture.
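For example (the exact wording is up to you, and /path/to/fixture is a placeholder for your own fixture path), you might prompt Claude Code with: "Use the eval-runner skill to run the react-craft eval suite against the fixture at /path/to/fixture and summarize the results."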
Dependency Matrix
Required Modules
None required

Components
references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: eval-runner
Download link: https://github.com/kaelig/react-craft/archive/main.zip#eval-runner
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.