captum
Official
Understand PyTorch model behavior.
Category: Software Engineering
Tags: compliance, risk assessment, pytorch, explainable AI, interpretability, model understanding
Author: DTMC-marketplace
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill helps developers and researchers understand the inner workings of PyTorch models, supporting debugging, compliance assessment, and risk analysis.
Core Features & Use Cases
- Model Interpretability: Apply various attribution methods (Integrated Gradients, DeepLIFT, GradCAM) to understand feature importance.
- Compliance Assessment: Evaluate AI systems against EU AI Act Art. 13 requirements by analyzing model behavior.
- Risk Mitigation: Identify potential biases or vulnerabilities in models through interpretability analysis.
- Use Case: Debugging why a PyTorch image classification model misclassifies certain images by visualizing which parts of the image contributed most to the incorrect prediction.
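Integrated Gradients, the first attribution method listed above, explains a prediction by integrating the model's gradient along a straight path from a baseline input to the actual input. In practice this skill would use Captum's `IntegratedGradients` class on a PyTorch model; the core idea can be sketched dependency-free on a toy differentiable function (the function, baseline, and step count below are illustrative, not part of the Captum API):

```python
# Sketch of Integrated Gradients on a toy function f(x) = x0**2 + 3*x1.
# Attribution for feature i:
#   (x_i - b_i) * integral over a in [0,1] of df/dx_i(b + a*(x - b)) da,
# approximated here with a midpoint Riemann sum. Captum's
# IntegratedGradients does the same using autograd gradients of a real model.

def f(x):
    return x[0] ** 2 + 3 * x[1]

def grad_f(x):
    # Analytic gradient of the toy function (a PyTorch model uses autograd).
    return [2 * x[0], 3.0]

def integrated_gradients(x, baseline, steps=100):
    attrs = [0.0] * len(x)
    for k in range(1, steps + 1):
        alpha = (k - 0.5) / steps  # midpoint of each subinterval
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(len(x)):
            attrs[i] += g[i] * (x[i] - baseline[i]) / steps
    return attrs

x, baseline = [2.0, 1.0], [0.0, 0.0]
attrs = integrated_gradients(x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline) = 7.0.
print(attrs, sum(attrs))
```

The completeness property shown in the final line is what makes Integrated Gradients useful for the debugging use case above: the per-feature attributions account exactly for the change in the model's output relative to the baseline.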
Quick Start
Use the captum skill to analyze feature importance for the attached model 'resnet50.pth' on the input 'sample_image.jpg'.
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: Let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: captum
Download link: https://github.com/DTMC-marketplace/governance/archive/main.zip#captum
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.