llm-prompting
A Community Skill: master LLM prompting patterns and safety.
Category: Software Engineering
Tags: evaluation, governance, prompting, llm-client, llm-prompting, schema-enforcement, category-calibration
Author: uabbasi
Version: 1.0.0
System Documentation
What problem does it solve?
This Skill provides expert guidance on LLM prompting infrastructure, patterns, and conventions used across the project's data-pipeline and evaluation components, enabling consistent design and safer, more reliable outputs.
Core Features & Use Cases
- Versioned Prompt System: frontmatter-based prompts with version tracking, hashing, and load/validate utilities (see the loader sketch after this list).
- LLM Client Architecture: multi-provider routing and task-based model selection with deterministic fallbacks (see the routing sketch after this list).
- Schema Enforcement: JSON validation via Pydantic models to guarantee structured outputs.
- Category Calibration: domain-specific benchmarks injected into prompts to improve calibration.
- Robust Quality Gating: evaluation and judge modules to ensure output reliability and governance.
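As a rough illustration of the versioned-prompt pattern, the sketch below loads a prompt file with YAML frontmatter, checks the required fields, and hashes the body for change tracking. The file layout, field names, and the `PromptTemplate` type are assumptions for illustration, not this Skill's actual API.

```python
"""Minimal sketch of a frontmatter-based, versioned prompt loader (illustrative only)."""
import hashlib
from dataclasses import dataclass
from pathlib import Path

import yaml  # requires PyYAML


@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: str
    body: str
    content_hash: str  # SHA-256 of the body, used for change tracking


def load_prompt(path: Path) -> PromptTemplate:
    """Parse '---'-delimited YAML frontmatter, validate required fields, hash the body."""
    text = path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        raise ValueError(f"{path}: missing YAML frontmatter")
    # Split into (empty prefix, frontmatter, body) on the first two '---' markers.
    _, frontmatter, body = text.split("---", 2)
    meta = yaml.safe_load(frontmatter) or {}
    for field in ("name", "version"):
        if field not in meta:
            raise ValueError(f"{path}: frontmatter missing required field '{field}'")
    body = body.strip()
    return PromptTemplate(
        name=str(meta["name"]),
        version=str(meta["version"]),
        body=body,
        content_hash=hashlib.sha256(body.encode("utf-8")).hexdigest(),
    )
```

Recording the content hash alongside the version makes it easy to detect silent edits to a prompt file during evaluation runs.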
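The routing sketch below shows one way to express task-based model selection with deterministic fallbacks: an ordered table of candidate models per task, tried in a fixed order. The provider and model names and the injected `call_model` callable are placeholders, not this project's real client.

```python
"""Minimal sketch of task-based model routing with deterministic fallbacks (illustrative only)."""
from typing import Callable

# Ordered fallback chain per task type; the first entry is the preferred model.
# Provider/model identifiers here are placeholders.
ROUTING_TABLE: dict[str, list[str]] = {
    "classification": ["provider-a/fast-model", "provider-b/fast-model"],
    "evaluation": ["provider-a/strong-model", "provider-b/strong-model"],
}


def complete(task: str, prompt: str, call_model: Callable[[str, str], str]) -> str:
    """Try each model configured for the task in a fixed order, falling back on failure."""
    errors: list[str] = []
    for model in ROUTING_TABLE.get(task, []):
        try:
            return call_model(model, prompt)
        except Exception as exc:  # real code would catch provider-specific errors
            errors.append(f"{model}: {exc}")
    raise RuntimeError(f"all models failed for task '{task}': {errors}")
```

Because the fallback order is a static table rather than a runtime heuristic, repeated runs route the same task to the same models, which keeps evaluations reproducible.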
Quick Start
Follow this guide to implement versioned prompts, category calibration, and schema enforcement in your LLM tooling.
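As a starting point, here is a minimal sketch combining category calibration with schema enforcement, assuming Pydantic v2: a per-category benchmark note is injected into the prompt, and the model's JSON reply is validated against a Pydantic model. The schema fields and benchmark text are illustrative, not the project's actual definitions.

```python
"""Minimal sketch of category calibration plus Pydantic schema enforcement (illustrative only)."""
from pydantic import BaseModel, Field, ValidationError

# Illustrative per-category calibration notes injected into the prompt to anchor scoring.
CATEGORY_BENCHMARKS: dict[str, str] = {
    "example_category_a": "Typical strong examples in this category score 6-8 out of 10.",
    "example_category_b": "Typical strong examples in this category score 4-6 out of 10.",
}


class Assessment(BaseModel):
    """Expected structured output; fields are placeholders, not the project's schema."""
    category: str
    score: float = Field(ge=0, le=10)
    rationale: str


def build_prompt(category: str, description: str) -> str:
    """Inject the category benchmark so scores are calibrated to the domain."""
    benchmark = CATEGORY_BENCHMARKS.get(category, "No benchmark available for this category.")
    return (
        f"Category: {category}\n"
        f"Calibration note: {benchmark}\n"
        "Assess the item below and reply with JSON containing "
        "'category', 'score' (0-10), and 'rationale':\n"
        f"{description}"
    )


def parse_response(raw_json: str) -> Assessment:
    """Validate the model's JSON reply against the schema; raise on violations."""
    try:
        return Assessment.model_validate_json(raw_json)
    except ValidationError:
        # A real pipeline might retry with a repair prompt or escalate to a judge model.
        raise
```

On validation failure, a production pipeline would typically retry with a repair prompt or escalate to the evaluation and judge modules described under Core Features.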
Dependency Matrix
Required Modules: None required
Components: Standard package
💻 Claude Code Installation
Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: llm-prompting
Download link: https://github.com/uabbasi/good-measure-giving/archive/main.zip#llm-prompting
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.