advanced-evaluation
Community
LLM-based evaluation patterns for scale.
Author: georgeguimaraes
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This skill provides production-grade techniques for evaluating LLM outputs, including direct scoring, pairwise comparison with bias mitigation, rubric design, and confidence calibration.
Core Features & Use Cases
- Direct Scoring with structured prompts and justification (see the sketch after this list)
- Pairwise Comparison with position-swapping bias mitigation (see the sketch under Quick Start)
- Rubric Generation for consistent evaluation
- Bias Mitigation and Confidence Calibration
- Cost-aware, multi-model evaluation patterns
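For concreteness, here is a minimal sketch of the direct-scoring pattern: a structured prompt that forces a numeric score plus a justification, returned as JSON so it can be validated. The prompt template, the 1-5 scale, and the `judge` callable (standing in for whatever LLM client you use) are illustrative assumptions, not this skill's actual API.

```python
import json
from typing import Callable

# Hypothetical prompt template: demands a score and a one-sentence
# justification, as JSON, so the result can be parsed and checked.
SCORING_PROMPT = """\
You are grading a model response against the criterion below.

Criterion: {criterion}

Response to grade:
{response}

Reply with JSON only: {{"score": <integer 1-5>, "justification": "<one sentence>"}}
"""

def direct_score(response: str, criterion: str, judge: Callable[[str], str]) -> dict:
    """Score one response on a 1-5 scale, requiring a justification."""
    raw = judge(SCORING_PROMPT.format(criterion=criterion, response=response))
    result = json.loads(raw)  # fail loudly if the judge breaks the schema
    if not 1 <= int(result["score"]) <= 5:
        raise ValueError(f"score out of range: {result['score']}")
    return result
```

Requiring the justification alongside the score tends to make judge models more consistent, and the JSON schema makes failures detectable rather than silent.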
Quick Start
Run the evaluation demo to see multiple evaluation patterns:

Command: python dot_claude/skills/context-engineering/advanced-evaluation/scripts/evaluation_example.py
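The demo walks through several of these patterns. As one example, a position-swapped pairwise comparison can be sketched roughly as follows; the prompt wording and the `judge` callable are assumptions for illustration, not the script's actual interface.

```python
from typing import Callable

# Hypothetical prompt: the two candidates are labeled only by position,
# so any preference for "Response 1" over "Response 2" is position bias.
PAIRWISE_PROMPT = (
    "Task: {task}\n\n"
    "Response 1:\n{first}\n\n"
    "Response 2:\n{second}\n\n"
    "Which response is better? Answer with exactly '1' or '2'."
)

def pairwise_compare(a: str, b: str, task: str, judge: Callable[[str], str]) -> str:
    """Judge the pair twice with positions swapped to cancel position bias.

    Returns "a", "b", or "tie"; a verdict that flips when the order
    flips is treated as position bias and scored as a tie.
    """
    first = judge(PAIRWISE_PROMPT.format(task=task, first=a, second=b)).strip()
    second = judge(PAIRWISE_PROMPT.format(task=task, first=b, second=a)).strip()
    if first == "1" and second == "2":
        return "a"  # a preferred in both orders
    if first == "2" and second == "1":
        return "b"  # b preferred in both orders
    return "tie"    # inconsistent verdicts: likely position bias
```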
Dependency Matrix
Required Modules: None required
Components: scripts, references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: advanced-evaluation
Download link: https://github.com/georgeguimaraes/dotfiles/archive/main.zip#advanced-evaluation
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.