mi-experimenter
Community
Unlock model insights with R_V analysis.
Software Engineering
Tags: mechanistic interpretability, activation patching, R_V, causal validation, transformer analysis, LLM interpretability
Author: AmitabhainArunachala
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill automates complex mechanistic interpretability experiments, helping researchers understand how specific components of a neural network contribute to its behavior.
Core Features & Use Cases
- R_V Measurement: Quantify the representational quality of activations using the R_V metric.
- Causal Validation: Run controlled experiments (e.g., ablation, activation patching) to isolate the causal contribution of specific model components such as layers or attention heads.
- Cross-Architecture Analysis: Compare R_V across different model families (GPT-2, Llama, Mistral) to find generalizable patterns.
- Use Case: Identify which MLP layers in a large language model are most critical for factual recall by ablating them and measuring the resulting drop in R_V.
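The exact definition of R_V is not given on this page; as an illustrative stand-in, the sketch below measures representational quality as the participation ratio (effective rank) of an activation matrix's covariance spectrum, a common proxy in interpretability work. The function name `r_v` and this interpretation are assumptions, not the skill's actual implementation.

```python
import numpy as np

def r_v(activations: np.ndarray) -> float:
    """Participation ratio of the activation covariance spectrum.

    Illustrative stand-in for the skill's R_V metric (exact
    definition assumed): values near 1 mean one dominant direction;
    values near d mean activations fill all d dimensions.
    """
    # Center activations (shape: n_samples x d_model)
    X = activations - activations.mean(axis=0, keepdims=True)
    # Covariance eigenvalues via squared singular values
    s = np.linalg.svd(X, compute_uv=False) ** 2
    return float(s.sum() ** 2 / (s ** 2).sum())

rng = np.random.default_rng(0)
# Rank-1 activations -> R_V close to 1
low_rank = rng.normal(size=(200, 1)) @ rng.normal(size=(1, 64))
# Isotropic activations -> R_V close to 64
isotropic = rng.normal(size=(200, 64))
print(r_v(low_rank), r_v(isotropic))
```

A large drop in such a measure after ablating a layer would flag that layer as representationally important, which is the pattern the Use Case above describes.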
Quick Start
Use the mi-experimenter skill to run causal validation on the 'mistralai/Mistral-7B-v0.1' model targeting layer 27.
Dependency Matrix
Required Modules
torch, numpy, pandas, scipy, transformers, accelerate, rv-toolkit
Components
scripts, references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill: Name: mi-experimenter Download link: https://github.com/AmitabhainArunachala/clawd/archive/main.zip#mi-experimenter Please download this .zip file, extract it, and install it in the .claude/skills/ directory.