Agent Output Quality Verification Engine (품질 검증 엔진)
Automates LLM output quality checks.
Category: Community / Software Engineering
Tags: quality assurance, data validation, automated testing, pipeline automation, llm verification, evidence scoring
Author: sabyunrepo
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill automates comprehensive quality verification of Large Language Model (LLM) outputs across an entire pipeline, ensuring accuracy, relevance, and adherence to quality standards.
Core Features & Use Cases
- Automated Quality Gates: Implements scoring and validation for LLM-generated content at various pipeline phases.
- Evidence Score: Assigns a score (0-100) to assess the quality and grounding of LLM outputs, with defined actions for different score ranges (PASS, REVISE, REJECT).
- Multi-dimensional Quality Assessment: Evaluates outputs based on criteria like relevance, clarity, depth, bias, evidence, hallucination, and more.
- Use Case: Automatically verify the quality of job descriptions, candidate profiles, and interview questions generated by an LLM to ensure they meet predefined standards before proceeding to the next stage.
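The listing does not document the actual scoring API or threshold values. As an illustration only, here is a minimal sketch of how an evidence-score gate with PASS/REVISE/REJECT actions might be structured; the `gate` function, the 80/50 cutoffs, and the dimension averaging are assumptions, not taken from the skill itself (only the 0-100 range, the three actions, and the dimension names appear in the listing):

```python
from dataclasses import dataclass

# Dimension names taken from the feature list above; the skill mentions
# "and more", so this set is illustrative, not exhaustive.
DIMENSIONS = ["relevance", "clarity", "depth", "bias", "evidence", "hallucination"]

@dataclass
class GateResult:
    score: float  # aggregate evidence score, 0-100
    action: str   # "PASS", "REVISE", or "REJECT"

def gate(dimension_scores: dict[str, float],
         pass_threshold: float = 80.0,     # assumed cutoff
         reject_threshold: float = 50.0):  # assumed cutoff
    """Average per-dimension scores (each 0-100) and map the result to an action."""
    score = sum(dimension_scores.values()) / len(dimension_scores)
    if score >= pass_threshold:
        action = "PASS"    # hand the output to the next pipeline phase
    elif score >= reject_threshold:
        action = "REVISE"  # send the output back with feedback for revision
    else:
        action = "REJECT"  # discard and regenerate
    return GateResult(score, action)

result = gate({d: 85.0 for d in DIMENSIONS})
print(result.action)  # PASS, since the average is 85.0
```

A real gate would likely weight dimensions differently (e.g. hallucination more heavily than clarity), but a simple average keeps the PASS/REVISE/REJECT mapping easy to follow.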
Quick Start
Use the quality-engine skill to verify the output quality of the LLM for phase P3 questions.
Dependency Matrix
Required Modules
None required
Components
scripts, references
💻 Claude Code Installation
Recommended: let Claude install the Skill automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: Agent Output Quality Verification Engine (품질 검증 엔진)
Download link: https://github.com/sabyunrepo/IaaS/archive/main.zip#agent-output-quality-verification-engine
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.