ai-security-hardening
Community
Secure AI deployments.
Category: Software Engineering
Tags: ai security, prompt injection, api security, data exfiltration, llm security, model hardening
Author: BagelHole
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill protects AI and LLM deployments from critical security vulnerabilities like prompt injection, data exfiltration, and model theft, ensuring the integrity and safety of your AI systems.
Core Features & Use Cases
- Prompt Injection Defense: Implements input sanitization and guardrails to prevent malicious prompt manipulation.
- Data Exfiltration Prevention: Includes output filtering and PII scrubbing to protect sensitive information.
- API Security: Provides examples for rate limiting, token verification, and input validation for LLM API endpoints.
- Model Weight Security: Details methods for verifying model integrity and scanning for malware.
- Use Case: Secure a customer-facing chatbot by preventing users from tricking the LLM into revealing sensitive system information or executing unintended commands.
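As a minimal sketch of the prompt-injection defense above: a deny-list of common injection phrasings checked before input reaches the model. The pattern list here is illustrative only; the skill's actual guardrails are built on the listed nemoguardrails dependency, which this stand-in does not replicate.

```python
import re

# Illustrative deny-list of common injection phrasings; a real deployment
# would pair this with a guardrails framework such as NeMo Guardrails.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"reveal .*(instructions|system prompt)",
]

def sanitize_prompt(user_input: str, max_len: int = 2000) -> str:
    """Truncate oversized input and reject known injection phrasings."""
    text = user_input[:max_len]
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            raise ValueError("Potential prompt injection detected")
    return text
```

Regex deny-lists catch only known phrasings, which is why the skill layers them with guardrails rather than relying on them alone.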
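The output-filtering feature can be sketched with simple regex scrubbing. The skill itself lists presidio-analyzer and presidio-anonymizer for this job; the patterns below are a minimal, non-exhaustive stand-in to show the shape of the technique.

```python
import re

# Minimal regex stand-in for Presidio-based PII scrubbing; the patterns
# are illustrative, not exhaustive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub_output(text: str) -> str:
    """Replace detected PII in model output with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text
```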
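For the API-security feature, rate limiting on an LLM endpoint is commonly implemented as a token bucket. This is a generic standard-library sketch, not the skill's own code (the skill also pairs this with PyJWT-based token verification, not shown here).

```python
import time

class TokenBucket:
    """Simple per-client token-bucket rate limiter for an LLM API endpoint."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice you keep one bucket per API key so one client's burst cannot starve the rest.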
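Model-weight integrity checking typically means comparing a file's hash against a pinned digest before loading. A minimal sketch of that verification step:

```python
import hashlib

def verify_model_checksum(path: str, expected_sha256: str) -> bool:
    """Hash a model weights file in chunks and compare to a pinned digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large weight files do not exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

A failed check should abort loading; malware scanning of serialized weights (e.g. pickle payloads) is a separate step the skill's references cover.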
Quick Start
Use the ai-security-hardening skill to sanitize user input before sending it to the LLM.
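The flow might look like the sketch below; `sanitize_prompt` and `call_llm` are hypothetical placeholders for the skill's sanitizer and your model client, shown only to illustrate where sanitization sits in the pipeline.

```python
def sanitize_prompt(user_input: str) -> str:
    # Placeholder for the skill's sanitizer: strip non-printable
    # characters and truncate oversized input.
    cleaned = "".join(ch for ch in user_input if ch.isprintable() or ch in "\n\t")
    return cleaned[:2000]

def call_llm(prompt: str) -> str:
    # Stand-in for your model client (e.g. an API call).
    return f"LLM response to: {prompt}"

def handle_user_message(user_input: str) -> str:
    # Sanitize before the prompt ever reaches the model.
    return call_llm(sanitize_prompt(user_input))
```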
Dependency Matrix
Required Modules
- presidio-analyzer
- presidio-anonymizer
- nemoguardrails
- PyJWT
- structlog
Components
- scripts
- references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill:
Name: ai-security-hardening
Download link: https://github.com/BagelHole/DevOps-Security-Agent-Skills/archive/main.zip#ai-security-hardening
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.