saga-hallucination-detector
Community
Detects and prevents AI hallucinations.
Category: Software Engineering
Tags: hallucination detection, output validation, ai safety, reasoning quality, llm monitoring, real-time detection
Author: monkey1sai
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill acts as a real-time guardian against AI hallucinations, improving the accuracy and reliability of AI-generated content by detecting reasoning errors and drift away from the established context.
Core Features & Use Cases
- Real-time Detection: Monitors AI reasoning step-by-step and across the entire output prefix for inconsistencies.
- Dual-Layer Analysis: Employs both step-level confidence scoring and prefix-level trend analysis.
- Adaptive Thresholds: Dynamically adjusts detection sensitivity based on performance metrics.
- Use Case: When an AI agent is generating a complex report, this Skill can flag any steps where the logic becomes inconsistent or the output starts to drift from the established context, preventing the propagation of misinformation.
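This listing does not publish the skill's source, so the following is only a minimal sketch of how the dual-layer design described above (step-level confidence scoring, prefix-level trend analysis, and adaptive thresholds) could be structured. All class and method names, thresholds, and the drift heuristic are hypothetical, not the skill's actual implementation.

```python
from collections import deque


class HallucinationDetector:
    """Hypothetical sketch of dual-layer hallucination detection.

    Layer 1 flags individual low-confidence reasoning steps; layer 2 watches
    the recent output prefix for a sustained downward confidence trend.
    Thresholds adapt based on feedback about false positives.
    """

    def __init__(self, step_threshold=0.5, window=5):
        self.step_threshold = step_threshold   # adaptive step-level cutoff (illustrative value)
        self.history = deque(maxlen=window)    # recent step confidences for trend analysis

    def check_step(self, confidence):
        """Layer 1: flag any single step whose confidence falls below the cutoff."""
        self.history.append(confidence)
        return confidence < self.step_threshold

    def check_prefix(self):
        """Layer 2: flag a sustained confidence decline across the output prefix."""
        if len(self.history) < self.history.maxlen:
            return False  # not enough steps yet to judge a trend
        scores = list(self.history)
        deltas = [b - a for a, b in zip(scores, scores[1:])]
        return sum(deltas) / len(deltas) < -0.05  # average confidence is falling

    def adapt(self, false_positive):
        """Adaptive thresholds: loosen after a false positive, tighten otherwise."""
        self.step_threshold += -0.02 if false_positive else 0.01
        self.step_threshold = min(max(self.step_threshold, 0.1), 0.9)  # keep in a sane range
```

In this sketch, a monitoring loop would call `check_step` on each reasoning step's confidence score, periodically call `check_prefix` to catch gradual drift that no single step reveals, and call `adapt` when a human or downstream check confirms or rejects a flag.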
Quick Start
Activate the saga-hallucination-detector skill to monitor the current AI reasoning process for any signs of hallucination.
Dependency Matrix
Required Modules
None required

Components
Standard package

💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill:
Name: saga-hallucination-detector
Download link: https://github.com/monkey1sai/jacks_happy_bots/archive/main.zip#saga-hallucination-detector
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.