reviewer:challenge

Community

Critique and improve AI responses.

Author: atournayre
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill addresses the need for objective self-assessment of AI-generated responses, ensuring quality, accuracy, and adherence to user intent.

Core Features & Use Cases

  • Structured Evaluation: Provides a detailed, criteria-based scoring of AI responses.
  • Actionable Feedback: Identifies specific areas for improvement and suggests concrete revisions.
  • Use Case: After an AI generates a complex code explanation, use this Skill to evaluate its clarity, accuracy, and completeness, receiving a score and suggestions for a more understandable version.

Quick Start

Example prompt: "Use the reviewer:challenge skill to evaluate the last AI response for clarity, relevance, and completeness."

Dependency Matrix

Required Modules

None required

Components

Standard package

💻 Claude Code Installation

Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: reviewer:challenge
Download link: https://github.com/atournayre/claude-personas/archive/main.zip#reviewer-challenge

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
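If automatic installation is not available, the same steps can be done by hand. A minimal shell sketch follows; it assumes the archive extracts to a `claude-personas-main/` folder and that the skill sits in a `reviewer-challenge` subfolder (both are assumptions based on the download link, not verified against the repository layout):

```shell
# Create the skills directory first, so it exists even if a later step fails.
mkdir -p .claude/skills

# Download and extract the repository archive.
curl -fsSL -o /tmp/claude-personas.zip \
  https://github.com/atournayre/claude-personas/archive/main.zip
unzip -oq /tmp/claude-personas.zip -d /tmp

# Copy the skill folder. The `reviewer-challenge` path is an assumption;
# inspect the extracted tree and adjust if the layout differs.
cp -r /tmp/claude-personas-main/reviewer-challenge .claude/skills/ \
  || echo "Skill folder not found at the assumed path; check the extracted tree."
```

After copying, restart Claude Code so it picks up the new skill directory.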
