check-test-logical-resp-claude
Find logical gaps in passed test results.
System Documentation
What problem does it solve?
The check-test-logical-resp-claude skill identifies logical inconsistencies in model responses that passed automated tests. It does not re-check factual assertions; instead, it surfaces semantic mismatches where a response appears off-topic or contradicts its prompt. It resolves test-result files (test-results_.json and multi-model-results_.json) under src/IntegrationTesterApp/test-results/, using the same file-resolution approach as the analyze-test-json skill.
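The file-resolution step described above can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the glob patterns (with a `*` wildcard), the function name, and the "pick newest by modification time" rule are all assumptions about how such resolution might work.

```python
import glob
import os


def resolve_latest_results(base_dir="src/IntegrationTesterApp/test-results"):
    """Return the most recently modified test-result JSON file, or None.

    Hypothetical sketch: the wildcard patterns below are assumed; the
    real skill may resolve files differently.
    """
    patterns = ["test-results_*.json", "multi-model-results_*.json"]
    candidates = []
    for pattern in patterns:
        candidates.extend(glob.glob(os.path.join(base_dir, pattern)))
    if not candidates:
        return None
    # Prefer the newest file so a fresh test run is audited by default.
    return max(candidates, key=os.path.getmtime)
```

A caller would typically invoke this once, then pass the resolved path on to the audit step.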
Core Features & Use Cases
- Validate that model responses align with the prompt's topic, entities, and intent.
- Identify passed tests where the response is semantically illogical or off-topic.
- Resolve test-result JSON files (test-results_.json or multi-model-results_.json) using the same file-resolution approach as the analyze-test-json skill.
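The audit described by the features above can be approximated with a simple scan. This is a hedged sketch only: the JSON schema (a list of records with "status", "prompt", and "response" keys) and the keyword heuristic are assumptions, and the real skill relies on semantic judgment rather than string matching.

```python
import json


def passed_tests_with_topic_mismatch(path, topic_terms):
    """Flag passed tests whose response mentions none of the topic terms.

    Assumed schema: a JSON array of records, each with "status",
    "prompt", and "response" fields. A crude stand-in for the skill's
    semantic check: a passed response that shares no key terms with
    the prompt's topic is a candidate for manual logical review.
    """
    with open(path) as f:
        results = json.load(f)
    flagged = []
    for record in results:
        if record.get("status") != "passed":
            continue  # only audit tests that already passed
        response = record.get("response", "").lower()
        if not any(term.lower() in response for term in topic_terms):
            flagged.append(record)
    return flagged
```

Flagged records would then be surfaced to the user for a closer semantic read, rather than failed automatically.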
Quick Start
Use the check-test-logical-resp-claude skill to audit a test-results JSON file and identify any passed tests with illogical responses.
Dependency Matrix
Required Modules
None required
Components
Standard package
💻 Claude Code Installation
Recommended: let Claude install automatically. Copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: check-test-logical-resp-claude
Download link: https://github.com/dezverev/AnimalAL-v1/archive/main.zip#check-test-logical-resp-claude
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.