prompt-injection-guard
OfficialSecure AI from prompt injection.
Software Engineering#security#input validation#prompt injection#output validation#ai safety#llm security
Authorlatestaiagents
Version1.0.0
Installs0
System Documentation
What problem does it solve?
This Skill protects AI applications from malicious prompt injection attacks, ensuring the integrity and security of AI-driven systems.
Core Features & Use Cases
- Input Validation: Detects and sanitizes potentially harmful user inputs before they reach the LLM.
- Output Validation: Verifies that the LLM's output does not contain sensitive information or unintended instructions.
- Defense Strategies: Implements layered security including blocklists, prompt structuring, canary tokens, and LLM-based detection.
- Use Case: When building a customer-facing chatbot that processes user queries, use this skill to prevent users from manipulating the chatbot into revealing sensitive data or performing unauthorized actions.
Quick Start
Use the prompt-injection-guard skill to validate and sanitize user input before processing it with an LLM.
Dependency Matrix
Required Modules
None requiredComponents
Standard package💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill: Name: prompt-injection-guard Download link: https://github.com/latestaiagents/agent-skills/archive/main.zip#prompt-injection-guard Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
Agent Skills Search Helper
Install a tiny helper to your Agent, search and equip skill from 223,000+ vetted skills library on demand.