llm-evaluation
Community
Automated and human evaluation for LLMs.
Author: 48Nauts-Operator
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Implements comprehensive evaluation strategies for LLM applications, combining automated metrics, human feedback, and benchmarking.
Core Features & Use Cases
- Automated metrics (BLEU, ROUGE, METEOR, BERTScore, Perplexity); a scoring sketch follows this list
- Human evaluation dimensions (Accuracy, Coherence, Relevance, Fluency, Safety)
- LLM-as-judge patterns (single output, pairwise); a pairwise judge sketch follows below
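As a rough illustration of the automated metrics above, here is a minimal sketch that scores one prediction against one reference, assuming the `nltk` and `rouge-score` packages are installed. The Skill's bundled scripts may implement this differently.

```python
# Minimal sketch of reference-based metrics, assuming the `nltk` and
# `rouge-score` packages are installed.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "The Eiffel Tower is in Paris."
prediction = "The Eiffel Tower is located in Paris."

# BLEU compares n-gram overlap; smoothing avoids zero scores on short texts.
bleu = sentence_bleu(
    [reference.split()],
    prediction.split(),
    smoothing_function=SmoothingFunction().method1,
)

# ROUGE-1 and ROUGE-L measure unigram and longest-common-subsequence overlap.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, prediction)

print(f"BLEU: {bleu:.3f}")
print(f"ROUGE-1 F1: {rouge['rouge1'].fmeasure:.3f}")
print(f"ROUGE-L F1: {rouge['rougeL'].fmeasure:.3f}")
```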
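The pairwise LLM-as-judge pattern can be sketched as follows. Here `call_llm` is a hypothetical stand-in for whatever judge-model client the Skill configures (it takes a prompt string and returns the model's reply); the prompt wording is illustrative, not the Skill's actual template.

```python
# Sketch of the pairwise LLM-as-judge pattern. `call_llm` is a
# hypothetical callable: prompt string in, judge reply string out.
JUDGE_PROMPT = """You are an impartial judge. Given a question and two
answers, reply with exactly "A", "B", or "TIE".

Question: {question}
Answer A: {answer_a}
Answer B: {answer_b}
Verdict:"""

def judge_pairwise(call_llm, question: str, answer_a: str, answer_b: str) -> str:
    """Ask the judge which answer is better; swap the answer order on a
    second pass to reduce position bias, and keep only agreeing verdicts."""
    first = call_llm(JUDGE_PROMPT.format(
        question=question, answer_a=answer_a, answer_b=answer_b)).strip()
    second = call_llm(JUDGE_PROMPT.format(
        question=question, answer_a=answer_b, answer_b=answer_a)).strip()
    # Map the swapped verdict back to the original labels.
    swapped = {"A": "B", "B": "A", "TIE": "TIE"}.get(second, "TIE")
    return first if first == swapped else "TIE"
```

Running the judgment twice with the answers swapped is a common mitigation for position bias, where judge models tend to favor whichever answer appears first.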
Quick Start
Create an evaluation suite with BLEU, ROUGE, and human ratings for a QA dataset.
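A suite like that could look like the sketch below, which combines BLEU and ROUGE-L with per-example human ratings on the dimensions listed above. The dataset field names, the `collect_human_rating` callback, and the 1-5 rating scale are assumptions for illustration, not the Skill's actual API.

```python
# Illustrative evaluation suite for a QA dataset combining BLEU, ROUGE-L,
# and human ratings. Field names, the rating scale, and
# `collect_human_rating` are assumptions, not the Skill's actual API.
from dataclasses import dataclass, field
from statistics import mean
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

HUMAN_DIMENSIONS = ("accuracy", "coherence", "relevance", "fluency", "safety")

@dataclass
class QAResult:
    question: str
    bleu: float
    rouge_l: float
    human: dict = field(default_factory=dict)  # dimension -> 1-5 rating

def evaluate_qa(dataset, collect_human_rating=None):
    """dataset: iterable of dicts with 'question', 'reference', 'prediction'."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    results = []
    for row in dataset:
        bleu = sentence_bleu(
            [row["reference"].split()], row["prediction"].split(),
            smoothing_function=SmoothingFunction().method1)
        rouge_l = scorer.score(row["reference"], row["prediction"])["rougeL"].fmeasure
        human = {}
        if collect_human_rating:  # e.g. a CLI prompt or annotation-UI callback
            human = {d: collect_human_rating(row, d) for d in HUMAN_DIMENSIONS}
        results.append(QAResult(row["question"], bleu, rouge_l, human))
    print(f"mean BLEU:    {mean(r.bleu for r in results):.3f}")
    print(f"mean ROUGE-L: {mean(r.rouge_l for r in results):.3f}")
    return results
```

Keeping per-example results (rather than only aggregates) makes it easy to inspect the worst-scoring answers and to correlate automated scores with the human ratings.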
Dependency Matrix
Required Modules
None required
Components
assets, references, scripts
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: llm-evaluation
Download link: https://github.com/48Nauts-Operator/opencode-baseline/archive/main.zip#llm-evaluation
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.