experiment-audit
Community · Ensure ML experiment integrity.
Author: ihmorol
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill rigorously validates machine learning experiments for correctness, reproducibility, and integrity, helping prevent costly errors and building trust in reported results.
Core Features & Use Cases
- Comprehensive Validation: Audits data integrity, pipeline execution, metric correctness, and reproducibility.
- Leakage Detection: Specifically checks for and identifies data leakage across splits and through preprocessing.
- Use Case: A research team has just completed a complex ML experiment and needs to ensure their findings are robust and reproducible before publication. This Skill systematically reviews their experiment setup, data handling, and results, providing a confidence score and flagging any potential issues.
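The split-leakage check described above can be illustrated with a simple overlap test between train and test sets. This is a minimal sketch of the general technique, not the Skill's actual implementation; the function and variable names here are hypothetical:

```python
def check_split_leakage(train_rows, test_rows):
    """Return the set of exact-duplicate rows shared by train and test splits.

    Rows are treated as tuples of feature/label values; any non-empty
    result indicates direct data leakage between the splits.
    """
    train_set = set(map(tuple, train_rows))
    test_set = set(map(tuple, test_rows))
    return train_set & test_set


# Example: one row accidentally present in both splits.
train = [(1.0, 0.5, 0), (2.0, 0.1, 1), (3.3, 0.9, 0)]
test = [(2.0, 0.1, 1), (4.4, 0.2, 1)]

leaked = check_split_leakage(train, test)
if leaked:
    print(f"Leakage detected: {len(leaked)} shared row(s)")
```

A real audit would go further, e.g. checking near-duplicates and verifying that preprocessing statistics (scalers, encoders) were fit on the training split only.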
Quick Start
Ask Claude: "Run a full audit on the experiment located in the '/path/to/experiment/results' directory."
Dependency Matrix
Required Modules: None required
Components: references, scripts
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill:
Name: experiment-audit
Download link: https://github.com/ihmorol/unsw-nb15-handling-binary-multiclass-ids/archive/main.zip#experiment-audit
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.