self-evaluation-patterns

Community

Enhance AI output quality and reliability.

Author: msageha
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill provides a structured framework for AI agents to perform self-evaluation before finalizing output. By verifying accuracy, completeness, and adherence to scope before reporting completion, agents reduce errors and improve overall output quality.

Core Features & Use Cases

  • Completion Checklist: A step-by-step verification process against acceptance criteria, scope, consistency, build status, and side effects.
  • Quality Metrics: Assesses output based on accuracy, completeness, and consistency.
  • Failure Detection: Identifies early triggers for potential failures like premise collapse or scope creep.
  • Uncertainty Levels: Quantifies the confidence in the output, from "Certain" to "Unable to Judge".
  • Role-Specific Application: Tailors checks for Orchestrator, Planner, and Worker roles.
  • Use Case: A worker agent completing a coding task will use this skill to verify its code against requirements, check for unintended side effects, and ensure it builds successfully before reporting completion.
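The checklist and uncertainty levels above could be modeled roughly as follows. This is a minimal illustrative sketch, not the skill's actual implementation; all class, field, and level names below are assumptions based on the feature list.

```python
from dataclasses import dataclass
from enum import Enum


class Uncertainty(Enum):
    """Hypothetical confidence scale, mirroring 'Certain' .. 'Unable to Judge'."""
    CERTAIN = 1
    UNCERTAIN = 2
    UNABLE_TO_JUDGE = 3


@dataclass
class CompletionChecklist:
    """Illustrative self-evaluation record for a worker agent (names assumed)."""
    meets_acceptance_criteria: bool = False
    within_scope: bool = False
    consistent_with_codebase: bool = False
    build_passes: bool = False
    no_unintended_side_effects: bool = False

    def passed(self) -> bool:
        # All checks must hold before the agent reports completion.
        return all(vars(self).values())

    def uncertainty(self) -> Uncertainty:
        # Map the number of failed checks to a coarse confidence level.
        failures = sum(not ok for ok in vars(self).values())
        if failures == 0:
            return Uncertainty.CERTAIN
        if failures <= 2:
            return Uncertainty.UNCERTAIN
        return Uncertainty.UNABLE_TO_JUDGE
```

A worker agent would fill in each field as it verifies its output, report completion only when `passed()` is true, and attach the `uncertainty()` level otherwise.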

Quick Start

Use the self-evaluation-patterns skill to check if the generated code meets all acceptance criteria and has no unintended side effects.

Dependency Matrix

Required Modules

None required

Components

Standard package

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: self-evaluation-patterns
Download link: https://github.com/msageha/maestro_v2/archive/main.zip#self-evaluation-patterns

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
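If you prefer to install manually, the steps Claude performs can be sketched as the shell commands below. The archive's extracted directory name and the skill's subdirectory location inside it are assumptions; adjust them to match the actual repository layout.

```shell
# Run from the root of your project.
mkdir -p .claude/skills

# Download and extract the repository archive.
curl -L -o maestro_v2.zip \
  "https://github.com/msageha/maestro_v2/archive/main.zip"
unzip -q maestro_v2.zip

# Copy the skill into the project's skills directory
# (the subdirectory name inside the archive is an assumption).
cp -r maestro_v2-main/self-evaluation-patterns .claude/skills/

# Clean up the downloaded archive and extracted tree.
rm -rf maestro_v2.zip maestro_v2-main
```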
