nemo-guardrails
Community Skill. Secure LLM apps with programmable safety.
Category: Software Engineering. Tags: hallucination detection, prompt injection, guardrails, PII filtering, LLM security, runtime safety, toxicity detection
Author: DoanNgocCuong
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill provides a runtime safety framework for Large Language Model (LLM) applications, built around NVIDIA's open-source NeMo Guardrails toolkit, to keep model behavior safe, reliable, and ethical.
Core Features & Use Cases
- Runtime Safety: Implements guardrails to prevent harmful outputs, detect prompt injections, and validate inputs/outputs.
- Content Moderation: Features include PII filtering, toxicity detection, and hallucination checking.
- Use Case: Integrate this Skill into a customer-facing chatbot to automatically filter out sensitive personal information, block inappropriate user requests, and ensure the LLM's responses are factual and non-toxic, thereby protecting users and maintaining brand integrity.
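To make the PII-filtering idea above concrete, here is a minimal, self-contained sketch of regex-based redaction. This is an illustration of the kind of check such a rail performs, not NeMo Guardrails' own implementation; the pattern names and placeholders are invented for this example, and a production rail would use far richer detectors.

```python
import re

# Illustrative PII patterns (deliberately simplified; real guardrails
# use dedicated entity detectors rather than a handful of regexes).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact_pii("Reach me at jane.doe@example.com or 555-123-4567."))
```

In a chatbot, a filter like this would run on both the user input and the model output before either is passed along, which is exactly where a runtime guardrail sits in the request path.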
Quick Start
Install the `nemoguardrails` library: `pip install nemoguardrails`.
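NeMo Guardrails reads its rail definitions from a config directory. The fragment below is a minimal sketch of a Colang flow that blocks a topic; the user/bot message names and the example utterances are illustrative, not part of any shipped config.

```colang
define user ask about politics
  "who should I vote for?"

define bot refuse political topics
  "I'd rather not discuss politics, but I'm happy to help with something else."

define flow
  user ask about politics
  bot refuse political topics
```

This file would sit alongside a `config.yml` naming the backing model; at runtime the application loads the directory with `RailsConfig.from_path(...)` and wraps the LLM in `LLMRails` so every exchange passes through the defined rails.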
Dependency Matrix
Required Modules
nemoguardrails
Components
references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill:
Name: nemo-guardrails
Download link: https://github.com/DoanNgocCuong/continuous-training-pipeline_T3_2026/archive/main.zip#nemo-guardrails
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.