ai-prompt-injection

Community

Secure AI/LLM APIs against manipulation.

Authordevtint
Version1.0.0
Installs0

System Documentation

What problem does it solve?

This Skill identifies and tests for vulnerabilities in AI and LLM-powered APIs that could allow attackers to manipulate model behavior, bypass safety controls, or extract sensitive information.

Core Features & Use Cases

  • Direct Prompt Injection: Tests for basic instruction overrides and role-playing attacks.
  • System Prompt Extraction: Attempts to reveal the AI's underlying instructions and configuration.
  • Indirect Injection: Simulates attacks where malicious content in documents or web pages influences AI behavior.
  • Data Exfiltration & Jailbreaking: Explores methods to extract data or bypass AI safety mechanisms.
  • Use Case: When testing a new chatbot interface, use this Skill to ensure users cannot trick the AI into revealing confidential system prompts or performing unauthorized actions.

Quick Start

Run the provided Python script against the target AI endpoint to test for basic prompt injection vulnerabilities.

Dependency Matrix

Required Modules

None required

Components

scriptsreferences

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.

Please help me install this Skill:
Name: ai-prompt-injection
Download link: https://github.com/devtint/API_PENTEST/archive/main.zip#ai-prompt-injection

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
View Source Repository

Agent Skills Search Helper

Install a tiny helper to your Agent, search and equip skill from 223,000+ vetted skills library on demand.