azure-ai-contentsafety-py

Official

Detect harmful content in text and images.

Author: MoonAxis
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill wraps the Azure AI Content Safety Python SDK (`azure-ai-contentsafety`) to automatically detect and flag harmful or inappropriate content in both text and images, helping you keep applications and their users safe.

Core Features & Use Cases

  • Text Moderation: Analyze text for hate speech, self-harm, sexual content, and violence with configurable severity levels.
  • Image Moderation: Detect harmful content within images across the same categories (hate, self-harm, sexual content, and violence), each with a severity score.
  • Custom Blocklists: Create and manage custom lists of terms to block for domain-specific moderation needs.
  • Use Case: A social media platform can use this Skill to automatically scan user-generated posts and images, preventing the spread of offensive material and maintaining community standards.
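As a rough sketch of how the text-moderation feature above could be used: the function names, environment-style parameters, and default thresholds below are illustrative assumptions, not part of this Skill's API. The `ContentSafetyClient` / `AnalyzeTextOptions` calls follow the public `azure-ai-contentsafety` SDK; the SDK import is kept lazy so the policy helper works on its own.

```python
from typing import Dict, List


def flagged_categories(
    categories_analysis: List[dict], thresholds: Dict[str, int]
) -> List[str]:
    """Return category names whose severity meets or exceeds the caller's
    threshold -- a simple policy layer on top of the service's per-category
    severity scores (hypothetical helper, not part of the Skill)."""
    return [
        item["category"]
        for item in categories_analysis
        # Assumption: treat severity >= 4 as blockable when no threshold given.
        if item["severity"] >= thresholds.get(item["category"], 4)
    ]


def analyze_text(endpoint: str, key: str, text: str) -> List[dict]:
    """Call the Azure AI Content Safety text-analysis API and normalize the
    response to plain dicts. Requires `pip install azure-ai-contentsafety`;
    imported lazily so the policy helper above has no SDK dependency."""
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
    response = client.analyze_text(AnalyzeTextOptions(text=text))
    return [
        {"category": str(item.category), "severity": item.severity}
        for item in response.categories_analysis
    ]
```

A post would then be blocked if `flagged_categories(...)` returns a non-empty list; the thresholds let each deployment tune how strict moderation is per category.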

Quick Start

Use the azure-ai-contentsafety-py skill to analyze the provided text for harmful content.
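Under the hood, an image-moderation call might look like the sketch below. The function names and the `worst_severity` convenience helper are illustrative assumptions; the `AnalyzeImageOptions` / `ImageData` usage follows the public `azure-ai-contentsafety` SDK.

```python
from typing import Dict


def worst_severity(severities: Dict[str, int]) -> int:
    """Convenience helper (hypothetical): the highest severity across all
    categories, or 0 when the image is clean."""
    return max(severities.values(), default=0)


def analyze_image(endpoint: str, key: str, image_path: str) -> Dict[str, int]:
    """Send an image file to Azure AI Content Safety and return a
    {category: severity} mapping. Requires `pip install azure-ai-contentsafety`
    and a Content Safety endpoint plus API key."""
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
    from azure.core.credentials import AzureKeyCredential

    with open(image_path, "rb") as f:
        request = AnalyzeImageOptions(image=ImageData(content=f.read()))
    client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
    response = client.analyze_image(request)
    return {
        str(item.category): item.severity
        for item in response.categories_analysis
    }
```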

Dependency Matrix

Required Modules

None required

Components

references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: azure-ai-contentsafety-py
Download link: https://github.com/MoonAxis/azure-stack/archive/main.zip#azure-ai-contentsafety-py

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
View Source Repository
