Assess Output Quality
Category: Community
Score LLM output quality
Software Engineering · Tags: code review, prompt engineering, llm evaluation, quality assessment, output validation, iteration guidance
Author: HermeticOrmus
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill automates the evaluation of LLM-generated output against specific task requirements, determining whether further iteration is needed or the solution is complete.
Core Features & Use Cases
- Automated Quality Scoring: Assigns a numerical score (0.0-1.0) based on correctness, completeness, clarity, and overall quality.
- Actionable Feedback: Identifies specific strengths and gaps in the output.
- Use Case: After an AI generates code for a new feature, use this Skill to get an objective assessment of its quality and decide whether to proceed to deployment or request revisions.
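The scoring model described above can be sketched as a weighted rubric. Note this is an illustrative sketch only: the dimension names come from the feature list, but the weights, the 0.8 iteration threshold, and the `QualityAssessment` class are assumptions, not the Skill's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class QualityAssessment:
    """Holds per-dimension scores (each 0.0-1.0) for a piece of output."""
    correctness: float
    completeness: float
    clarity: float

    def overall(self) -> float:
        # Weighted average; correctness counts most (weights are assumed).
        total = (self.correctness * 0.5
                 + self.completeness * 0.3
                 + self.clarity * 0.2)
        return round(total, 2)

    def needs_iteration(self, threshold: float = 0.8) -> bool:
        # Below the threshold, request another revision pass.
        return self.overall() < threshold

assessment = QualityAssessment(correctness=0.9, completeness=0.7, clarity=0.8)
print(assessment.overall())          # 0.82
print(assessment.needs_iteration())  # False
```

A binary pass/fail threshold like this keeps the iterate-or-ship decision mechanical, which is the point of an automated assessor.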
Quick Start
Use the assess-quality skill to evaluate the output in the file 'output.md' against the original task description.
Dependency Matrix
Required Modules
None required
Components
scripts, references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill: Name: Assess Output Quality Download link: https://github.com/HermeticOrmus/hermetic-claude/archive/main.zip#assess-output-quality Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
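The manual steps in the prompt above (download the .zip, extract it, place it under .claude/skills/) can be sketched in Python. The URL and target directory are taken from the prompt text; the archive's internal folder layout is an assumption, so adjust the extracted subfolder name if the repo structure differs.

```python
import io
import urllib.request
import zipfile
from pathlib import Path

# Values taken from the install prompt above.
ARCHIVE_URL = "https://github.com/HermeticOrmus/hermetic-claude/archive/main.zip"
SKILLS_DIR = Path.home() / ".claude" / "skills"

def install_skill(url: str = ARCHIVE_URL, dest: Path = SKILLS_DIR) -> None:
    """Download the repo archive and extract it into the skills directory."""
    dest.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        zf.extractall(dest)
```

Letting Claude perform these steps (as recommended above) avoids path mistakes, but the sketch shows what the installation amounts to.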