reflect-calibration

Community

Calibrate AI confidence levels.

Author: zkysar1
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This skill addresses the challenge of unreliable confidence scores in AI-generated hypotheses, supporting more accurate self-assessment and better-informed decision-making.

Core Features & Use Cases

  • Confidence Binning: Groups hypotheses by confidence intervals (e.g., 70-79%); see the sketch after this list.
  • Accuracy Calculation: Computes the actual accuracy within each confidence bin.
  • Self-Consistency Check: Recommends methods for verifying hypothesis accuracy through multiple independent assessments.
  • Data Updates: Persists calibration findings to improve future performance.
  • Use Case: After generating 50 hypotheses, this skill analyzes how often hypotheses with 90%+ confidence were actually correct, identifying over- or under-confidence in the AI's self-assessment.
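
As a rough illustration of the binning and accuracy steps, here is a minimal Python sketch. It assumes hypotheses are logged as confidence/outcome records; the record format, function names, and the calibration.json output file are illustrative assumptions, not the skill's actual scripts.

```python
import json
from collections import defaultdict

# Hypothetical record format: each hypothesis carries the stated
# confidence (0-100) and whether it later proved correct.
hypotheses = [
    {"confidence": 92, "correct": True},
    {"confidence": 88, "correct": False},
    {"confidence": 74, "correct": True},
    # ... typically ~50 records, per the use case above
]

def bin_key(confidence, width=10):
    """Map a confidence score to a bin label, e.g. 74 -> '70-79%'."""
    low = (confidence // width) * width
    return f"{low}-{low + width - 1}%"

def calibration_report(records, width=10):
    """Group records by confidence bin and compute actual accuracy per bin."""
    bins = defaultdict(lambda: {"total": 0, "correct": 0})
    for r in records:
        b = bins[bin_key(r["confidence"], width)]
        b["total"] += 1
        b["correct"] += int(r["correct"])
    return {
        label: {"count": b["total"], "accuracy": round(b["correct"] / b["total"], 3)}
        for label, b in sorted(bins.items())
    }

report = calibration_report(hypotheses)
print(report)  # e.g. {'70-79%': {'count': 1, 'accuracy': 1.0}, ...}

# Data Updates: persist findings so later runs can adjust stated confidence.
with open("calibration.json", "w") as f:
    json.dump(report, f, indent=2)
```

Comparing each bin's accuracy against its midpoint (e.g., 0.745 for the 70-79% bin) indicates the direction of miscalibration: accuracy below the midpoint suggests overconfidence, accuracy above it suggests underconfidence.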

Quick Start

Run the reflect-calibration check to analyze hypothesis accuracy against stated confidence.
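
The Self-Consistency Check feature can be pictured the same way: gather several independent verdicts on one hypothesis and compare the agreement rate with the stated confidence. The verdicts below are placeholder inputs; how assessments are actually produced is up to the skill's scripts.

```python
from collections import Counter

def self_consistency(assessments):
    """Return the majority verdict over several independent assessments
    and the agreement rate, usable as an empirical confidence estimate."""
    counts = Counter(assessments)
    verdict, votes = counts.most_common(1)[0]
    return verdict, votes / len(assessments)

# Five hypothetical independent verdicts on the same hypothesis.
verdict, agreement = self_consistency([True, True, False, True, True])
print(verdict, agreement)  # True 0.8 -> compare against the stated confidence
```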

Dependency Matrix

Required Modules

None required

Components

scripts, references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: reflect-calibration
Download link: https://github.com/zkysar1/Claude-Skills-Continual-Learning-Base/archive/main.zip#reflect-calibration

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
