testing-methodologies
Structured AI security testing, repeatable.
System Documentation
What problem does it solve?
This Skill provides a structured, repeatable approach to AI security testing across the entire lifecycle—from reconnaissance and threat modeling to vulnerability assessment, exploitation, and reporting—so teams can identify and remediate risks efficiently.
Core Features & Use Cases
- Threat Modeling: Apply frameworks like STRIDE, threat trees, and MITRE ATLAS mappings to identify and prioritize AI-system risks.
- Vulnerability Testing: Systematically test input handling, output safety, model robustness, and access control with predefined categories and artifacts.
- Exploitation & Reporting: Develop PoCs, assess impact, and generate comprehensive security reports with remediation roadmaps for governance and compliance.
- Use Case: A data science team uses this methodology to conduct a full security assessment of an AI assistant before deployment, ensuring controls are in place.
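The threat-modeling step above can be sketched as plain data: a minimal, hypothetical STRIDE-style model of an AI assistant, with threats scored and sorted so the highest-risk items drive the test plan. The component names, threats, and scores here are illustrative assumptions, not templates shipped with the Skill.

```python
# Hypothetical STRIDE-style threat model for an AI assistant,
# sorted by risk (likelihood x impact) for test prioritization.
from dataclasses import dataclass

@dataclass
class Threat:
    component: str   # part of the AI system under test (illustrative)
    stride: str      # STRIDE category
    description: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (minor) .. 5 (critical)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

threats = [
    Threat("prompt interface", "Tampering", "prompt injection via user input", 4, 4),
    Threat("tool access", "Elevation of Privilege", "unauthorized tool invocation", 2, 5),
    Threat("model output", "Information Disclosure", "system prompt leakage", 3, 3),
]

# Highest-risk threats first, for test-plan prioritization
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:>2}  {t.stride:<25} {t.component}: {t.description}")
```

The same table-like structure can then be mapped onto MITRE ATLAS technique IDs per threat, keeping the model and the test plan in one place.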
Quick Start
Use the testing-methodologies skill to generate a full security testing plan for an AI assistant. Then review the included threat modeling templates and test plans to customize them for your environment. If needed, narrow the scope to specific phases: reconnaissance, threat modeling, vulnerability testing, exploitation, or reporting.
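The phase-scoping described in the Quick Start can be sketched as a simple ordered plan. The phase names follow the lifecycle named above; the checklist items and the `scoped` helper are illustrative assumptions, not the Skill's actual templates.

```python
# Hypothetical five-phase plan; narrowing preserves lifecycle order.
PLAN = {
    "reconnaissance": ["enumerate model endpoints", "catalog tools and data sources"],
    "threat modeling": ["apply STRIDE per component", "map findings to MITRE ATLAS"],
    "vulnerability testing": ["input handling", "output safety", "robustness", "access control"],
    "exploitation": ["build PoCs for high-risk findings", "assess real-world impact"],
    "reporting": ["write findings with severity", "attach remediation roadmap"],
}

def scoped(plan: dict, phases: list[str]) -> dict:
    """Narrow the plan to selected phases, preserving lifecycle order."""
    return {p: plan[p] for p in plan if p in phases}

# e.g. a pre-deployment review that skips active exploitation
review = scoped(PLAN, ["threat modeling", "vulnerability testing", "reporting"])
for phase, steps in review.items():
    print(phase, "->", ", ".join(steps))
```

Iterating over `plan` rather than `phases` in `scoped` keeps the output in lifecycle order even if the caller lists phases out of order.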
Dependency Matrix
Required Modules
None required
Components
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill: Name: testing-methodologies Download link: https://github.com/pluginagentmarketplace/custom-plugin-ai-red-teaming/archive/main.zip#testing-methodologies Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
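For a manual install, the prompt above amounts to roughly the following. This is a sketch for a Unix-like shell; the download step is commented out so you can review the archive URL first, and paths assume you are in your project root.

```shell
# Manual-install sketch for a Unix-like shell.
# The URL is the download link from the listing; adjust paths to your project.
SKILL_DIR=".claude/skills/testing-methodologies"
mkdir -p "$SKILL_DIR"

# Uncomment to fetch and unpack the archive (requires network access):
# curl -L -o skill.zip "https://github.com/pluginagentmarketplace/custom-plugin-ai-red-teaming/archive/main.zip"
# unzip -o skill.zip -d "$SKILL_DIR"

ls -d "$SKILL_DIR"   # confirm the install directory exists
```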