trustworthy-experiments
Community · Run trustworthy experiments with confidence.
Author: wdavidturner
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Trustworthy Experiments provides a framework to design, run, and interpret controlled experiments (A/B tests) so that results are reliable and actionable, and interpretation is not undermined by common validity threats.
Core Features & Use Cases
- Planning and preregistration with a clear Overall Evaluation Criterion (OEC) that balances success and guardrail metrics.
- Power analysis, sample-size estimation, and runtime guidance to achieve adequate sensitivity.
- SRM checks, replication, and guardrail monitoring to prevent false positives and long-term harm.
- Use Cases: A/B tests, feature pilots, gradual rollouts, and post-launch validation across product lines.
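The skill's sample_size.py is not reproduced here, but to illustrate the kind of power analysis the bullets above describe, here is a minimal sketch using the standard normal-approximation formula for a two-proportion test. The function name and interface are hypothetical, not the script's actual API:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p_base, mde_abs, alpha=0.05, power=0.8):
    """Approximate per-arm sample size to detect an absolute lift
    (mde_abs) over baseline rate p_base with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    p_treat = p_base + mde_abs
    p_bar = (p_base + p_treat) / 2
    # Standard two-proportion normal-approximation formula
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p_base * (1 - p_base) + p_treat * (1 - p_treat))) ** 2
         / mde_abs ** 2)
    return ceil(n)

# e.g. 10% baseline conversion, detect a +1 percentage-point lift
print(sample_size_per_arm(0.10, 0.01))  # roughly 15k users per arm
```

Runtime guidance then follows from dividing the required sample size by expected daily traffic per arm.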
Quick Start
Use the included references and scripts to design a pre-registered experiment plan:
1. Fill out an experiment plan using the template.
2. Run sample_size.py to estimate the required sample size.
3. Run srm_check.py on observed data to check for sample ratio mismatch (SRM) before interpreting results.
Dependency Matrix
Required Modules: None required
Components: scripts, references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill: Name: trustworthy-experiments Download link: https://github.com/wdavidturner/product-skills/archive/main.zip#trustworthy-experiments Please download this .zip file, extract it, and install it in the .claude/skills/ directory.