model-evaluator
Community
Rigorous ML model evaluation.
Category: Data & Analytics
Tags: data science, machine learning, performance metrics, model evaluation, interpretability, bias audit
Author: inbharatai
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill provides thorough, systematic evaluation of machine learning models so that their performance, reliability, and fairness can be verified.
Core Features & Use Cases
- Performance Metrics: Generates key metrics like accuracy, precision, recall, F1-score, and AUC.
- Visualizations: Creates confusion matrices and ROC curves for visual analysis.
- Bias Audits: Assesses models for fairness across different demographic groups.
- Interpretability: Provides insights into model predictions using techniques like SHAP.
- Use Case: After training a classification model, use this Skill to generate a comprehensive evaluation report including performance metrics, a confusion matrix, and an analysis of potential biases.
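The performance-metrics and confusion-matrix features described above can be sketched with scikit-learn. This is a minimal illustration, not the Skill's actual implementation; the synthetic dataset and RandomForestClassifier are assumptions standing in for your own model and data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)
from sklearn.model_selection import train_test_split

# Illustrative synthetic data; substitute your own trained model and dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]  # probability of the positive class

# The key metrics named in the feature list
report = {
    "accuracy": accuracy_score(y_test, pred),
    "precision": precision_score(y_test, pred),
    "recall": recall_score(y_test, pred),
    "f1": f1_score(y_test, pred),
    "auc": roc_auc_score(y_test, proba),
}

# Rows are true classes, columns are predicted classes
cm = confusion_matrix(y_test, pred)
print(report)
print(cm)
```

A bias audit extends the same idea by computing these metrics per demographic group and comparing them.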
Quick Start
Evaluate the attached model using cross-validation and generate a confusion matrix.
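The Quick Start prompt above corresponds roughly to the following scikit-learn workflow; the model and data here are placeholder assumptions, not what the Skill generates:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import confusion_matrix

# Placeholder data and model for illustration
X, y = make_classification(n_samples=300, random_state=0)
model = LogisticRegression(max_iter=1000)

# 5-fold cross-validated accuracy scores
scores = cross_val_score(model, X, y, cv=5)

# Out-of-fold predictions, so the confusion matrix covers every sample
oof_pred = cross_val_predict(model, X, y, cv=5)
cm = confusion_matrix(y, oof_pred)
print(scores.mean())
print(cm)
```

Using out-of-fold predictions avoids the optimistic bias of evaluating on training data.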
Dependency Matrix
Required Modules
sklearn, shap
Components
scripts, references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.
Please help me install this Skill: Name: model-evaluator Download link: https://github.com/inbharatai/claude-skills/archive/main.zip#model-evaluator Please download this .zip file, extract it, and install it in the .claude/skills/ directory.