eval-model-comparison
Benchmark OCR and form-filling accuracy.
Author: JustinChaney2023
Version: 1.0.0
System Documentation
What problem does it solve?
This Skill enables rigorous benchmarking of OCR, transcription, and LLM form-filling accuracy against gold references, guiding model selection and producing comparison reports.
Core Features & Use Cases
- Benchmarking: Compare OCR, transcription, and LLM form-filling pipelines using field-level metrics and hallucination checks (see the sketch after this list).
- Model Selection: Provide per-field confusion summaries and a model-comparison report template to inform choices among models and settings.
- Use Case: Build a benchmark that uses typed transcriptions as gold references to evaluate system reliability on medical notes or structured forms.
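To make the field-level scoring idea concrete, here is a minimal Python sketch. The function name `score_fields` and the outcome labels are illustrative assumptions, not the Skill's actual API.

```python
# A minimal sketch of field-level scoring against a gold reference.
# score_fields and the outcome labels are illustrative, not the Skill's API.

def score_fields(gold: dict, pred: dict) -> dict:
    """Compare one predicted form against its gold reference.

    Per-field outcomes:
      - correct:      field present and value matches
      - wrong:        field present but value differs
      - missing:      gold field absent from the prediction
      - hallucinated: predicted field with no gold counterpart
    """
    results = {}
    for field, gold_value in gold.items():
        if field not in pred:
            results[field] = "missing"
        elif str(pred[field]).strip().lower() == str(gold_value).strip().lower():
            results[field] = "correct"
        else:
            results[field] = "wrong"
    # Fields the model invented count as hallucinations.
    for field in pred.keys() - gold.keys():
        results[field] = "hallucinated"
    return results
```

The case-insensitive string comparison is a deliberate simplification; a real benchmark would likely add per-field normalizers (dates, numbers, checkboxes) before comparing.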
Quick Start
Run the eval-model-comparison workflow on a sample dataset to generate the initial model comparison report.
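The Skill's own workflow handles this end to end; as a rough picture of what it might compute, here is a hypothetical driver that aggregates the `score_fields` results above across a JSONL sample dataset. The file layout and record fields (`model`, `gold`, `pred`) are assumptions for illustration.

```python
# Hypothetical aggregation over a sample dataset, one JSON record per line:
#   {"model": "...", "gold": {...}, "pred": {...}}
# The record layout is an assumption, not the Skill's actual format.
import json
from collections import Counter

def compare_models(dataset_path: str) -> None:
    tallies: dict[str, Counter] = {}
    with open(dataset_path) as f:
        for line in f:
            record = json.loads(line)
            outcomes = score_fields(record["gold"], record["pred"])
            tallies.setdefault(record["model"], Counter()).update(outcomes.values())
    # One summary line per model: field accuracy plus hallucination count.
    for model, counts in tallies.items():
        total = sum(counts.values())
        print(f"{model}: {counts['correct'] / total:.1%} field accuracy, "
              f"{counts['hallucinated']} hallucinated fields")

compare_models("sample_dataset.jsonl")
```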
Dependency Matrix
Required Modules
None required
Components
Standard package
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: eval-model-comparison
Download link: https://github.com/JustinChaney2023/orate/archive/main.zip#eval-model-comparison
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.