write-judge-prompt
Design LLM judges for subjective criteria.
Category: Software Engineering · Tags: prompt-engineering, llm-as-judge, llm, evaluation, judge-prompt, subjective-criteria
Author: hamelsmu
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill helps you create precise LLM-as-Judge evaluators for subjective criteria that are difficult or impossible to check with automated, code-based methods.
Core Features & Use Cases
- Subjective Evaluation: Design judges for criteria like tone, faithfulness, relevance, and completeness.
- Binary Pass/Fail: Enforces a strict binary outcome for each evaluation, avoiding ambiguous rating scales.
- Structured Output: Ensures the judge writes a detailed critique before rendering its verdict.
- Use Case: You need to evaluate if an AI assistant's response to a customer query has the appropriate empathetic tone. This skill guides you in creating a judge that can assess this subjective quality.
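To make the critique-before-verdict pattern concrete, here is a minimal sketch of what such a judge prompt and its response parsing might look like. The template text, tag names, and `parse_verdict` helper are illustrative assumptions, not the skill's actual output.

```python
# Hypothetical judge prompt: the judge must write a critique first,
# then end with a strict binary verdict line.
JUDGE_PROMPT = """You are evaluating a customer-service email for empathetic tone.

<email>
{email}
</email>

First, write a short critique analyzing the email's tone.
Then, on the final line, output exactly one of: VERDICT: PASS or VERDICT: FAIL.
"""


def parse_verdict(judge_response: str) -> tuple[str, bool]:
    """Split a judge response into (critique, passed)."""
    critique, sep, verdict = judge_response.rpartition("VERDICT:")
    if not sep:
        raise ValueError("Judge did not emit a verdict line")
    return critique.strip(), verdict.strip().upper() == "PASS"


# Parsing a hypothetical judge response:
sample = "The email acknowledges the customer's frustration early on.\nVERDICT: PASS"
critique, passed = parse_verdict(sample)
print(passed)  # True
```

Forcing the critique to come first gives the judge room to reason before committing, and the single `VERDICT:` line keeps the pass/fail outcome trivially machine-parseable.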
Quick Start
Use the write-judge-prompt skill to design a judge for evaluating the tone of customer service emails.
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill: Name: write-judge-prompt Download link: https://github.com/hamelsmu/evals-skills/archive/main.zip#write-judge-prompt Please download this .zip file, extract it, and install it in the .claude/skills/ directory.