ai-stopping-hallucinations
Ground AI outputs with citations.
Type: Community
Category: Software Engineering
Tags: verification, citations, grounding, hallucination, faithfulness, retrieval-augmentation
Author: lebsral
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Ground AI outputs by enforcing source citations, verifying claims against documents, and applying retrieval-grounded checks to reduce hallucinations.
Core Features & Use Cases
- Citation enforcement: require inline citations for claims and validate that cited sources exist.
- Faithfulness verification: check that every claim is supported by the provided sources.
- Grounding via retrieval: retrieve relevant documents and constrain answers to the retrieved material.
- Self-check pattern: generate an answer and evaluate its faithfulness against the sources (see the sketches after this list).
- Cross-check pattern: generate multiple independent answers and compare for consistency.
- Confidence gating: surface low-confidence results for human review when needed.
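The first sketch below is a minimal illustration of the citation-enforcement and self-check patterns. The `ask_model` function is a hypothetical stand-in for whatever LLM client you use, and the regex assumes inline citations of the form [1], [2], and so on; neither is part of the skill itself.

```python
import re

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client; replace with a real call."""
    raise NotImplementedError

def extract_citations(answer: str) -> set[int]:
    """Collect inline citation markers like [1] or [2] from an answer."""
    return {int(n) for n in re.findall(r"\[(\d+)\]", answer)}

def enforce_citations(answer: str, sources: list[str]) -> list[str]:
    """Return a list of problems: no citations, or citations with no source."""
    problems = []
    cited = extract_citations(answer)
    if not cited:
        problems.append("answer contains no inline citations")
    dangling = sorted(n for n in cited if not 1 <= n <= len(sources))
    if dangling:
        problems.append(f"citations point to nonexistent sources: {dangling}")
    return problems

def self_check(answer: str, sources: list[str]) -> str:
    """Self-check pattern: ask the model to grade the answer against sources."""
    numbered = "\n".join(f"[{i}] {s}" for i, s in enumerate(sources, start=1))
    return ask_model(
        f"Sources:\n{numbered}\n\nAnswer:\n{answer}\n\n"
        "Is every claim in the answer supported by the sources above? "
        "Reply SUPPORTED or UNSUPPORTED, then list any unsupported claims."
    )
```

A second sketch, under the same assumptions, pairs the cross-check pattern with a crude confidence gate: sample several answers and pass one through only when a majority agree, otherwise flag it for human review. A real system would compare answers semantically rather than by exact string match.

```python
from collections import Counter

def cross_check(question: str, numbered_sources: str, n: int = 3) -> tuple[str, bool]:
    """Cross-check pattern: n independent answers, gated on majority agreement."""
    answers = [
        ask_model(f"Sources:\n{numbered_sources}\n\nQuestion: {question}")
        for _ in range(n)
    ]
    best, votes = Counter(answers).most_common(1)[0]
    confident = votes > n // 2  # crude gate; exact-match voting is a simplification
    return best, confident      # when not confident, route to human review
```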
Quick Start
Provide source documents to the AI and require citation, grounding, and verification before presenting answers.
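As a concrete illustration, here is one minimal way to wire those steps together. It reuses the hypothetical `ask_model` helper from the sketches above and a deliberately naive keyword retriever; any real retriever (BM25, embeddings) would slot into the same place.

```python
def retrieve(question: str, corpus: list[str], k: int = 3) -> list[str]:
    """Naive keyword-overlap retriever, for illustration only."""
    words = set(question.lower().split())
    def score(doc: str) -> int:
        return len(words & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def grounded_answer(question: str, corpus: list[str]) -> str:
    """Retrieve, constrain the model to the sources, and demand citations."""
    sources = retrieve(question, corpus)
    numbered = "\n".join(f"[{i}] {s}" for i, s in enumerate(sources, start=1))
    return ask_model(
        "Answer ONLY from the numbered sources below. Cite every claim "
        "inline as [n]. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}"
    )
```

Before presenting the result, run `enforce_citations` and `self_check` on it; anything flagged falls through to human review.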
Dependency Matrix
Required Modules: None required
Components: Standard package
💻 Claude Code Installation
Recommended: let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: ai-stopping-hallucinations
Download link: https://github.com/lebsral/DSPy-Programming-not-prompting-LMs-skills/archive/main.zip#ai-stopping-hallucinations
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.