nnsight-remote-interpretability

Explore neural network internals

Author: DoanNgocCuong
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill provides a unified interface for interpreting and manipulating the internal workings of any PyTorch neural network, including massive models that cannot be run locally.

Core Features & Use Cases

  • Remote Execution: Run interpretability experiments on models up to 405B parameters without local GPU resources via NDIF.
  • Universal PyTorch Support: Works with any PyTorch architecture (Transformers, Mamba, custom models).
  • Mechanistic Interpretability: Access and modify activations, gradients, and attention patterns for deep model understanding.
  • Use Case: Analyze the internal representations of a 70B parameter model to understand how it processes specific linguistic phenomena, without needing a supercomputer.
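A minimal sketch of the remote path described above. The model id and layer index are illustrative assumptions (substitute one from NDIF's currently hosted roster), and an NDIF API key is assumed to be available in the NDIF_API_KEY environment variable:

```python
import os


def run_remote_probe(prompt: str = "The capital of France is"):
    """Trace a large hosted model on NDIF instead of local hardware.

    Sketch only: requires `pip install nnsight` and a valid NDIF API key,
    and the model id below is a placeholder -- check NDIF's hosted list.
    """
    from nnsight import CONFIG, LanguageModel

    CONFIG.set_default_api_key(os.environ["NDIF_API_KEY"])
    model = LanguageModel("meta-llama/Meta-Llama-3.1-70B")

    # remote=True ships the trace to NDIF's servers; only the tensors
    # marked with .save() are sent back to the local machine.
    with model.trace(prompt, remote=True):
        hidden = model.model.layers[40].output[0].save()
    return hidden
```

The key point is that the trace body is identical to local code; only the `remote=True` flag changes where it executes.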

Quick Start

Use the nnsight skill to trace the model 'gpt2' with the prompt 'Hello world' and save the hidden states from layer 5.

Dependency Matrix

Required Modules

  • nnsight
  • torch

Components

  • scripts
  • references
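Assuming both modules are installed from PyPI (nnsight declares torch as a dependency, so a single command usually suffices):

```shell
pip install nnsight
```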

💻 Claude Code Installation

Recommended: let Claude install the Skill automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: nnsight-remote-interpretability
Download link: https://github.com/DoanNgocCuong/continuous-training-pipeline_T3_2026/archive/main.zip#nnsight-remote-interpretability

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
