normalization-techniques

Community

Stabilize deep networks, accelerate training.

Author: tachyon-beep
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill helps you choose the right normalization technique (BatchNorm, LayerNorm, GroupNorm, InstanceNorm, or RMSNorm) for your specific neural network architecture and batch size, preventing training instability, vanishing/exploding gradients, and slow convergence so that your deep networks train efficiently and effectively.
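
For reference, these techniques map onto PyTorch layers as sketched below (layer sizes are illustrative, and torch.nn.RMSNorm is only available in recent PyTorch releases, roughly 2.4 onward):

```python
import torch
import torch.nn as nn

x_img = torch.randn(8, 64, 32, 32)  # (N, C, H, W) CNN feature map
x_seq = torch.randn(8, 128, 512)    # (N, seq_len, d_model) token embeddings

batch_norm = nn.BatchNorm2d(64)        # per-channel stats over N, H, W
group_norm = nn.GroupNorm(8, 64)       # per-sample stats over channel groups
instance_norm = nn.InstanceNorm2d(64)  # per-sample, per-channel stats
layer_norm = nn.LayerNorm(512)         # per-token stats over the feature dim
rms_norm = nn.RMSNorm(512)             # LayerNorm without mean-centering

print(batch_norm(x_img).shape, group_norm(x_img).shape, instance_norm(x_img).shape)
print(layer_norm(x_seq).shape, rms_norm(x_seq).shape)
```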

Core Features & Use Cases

  • Technique Selection Guide: Understand when to use each normalization method based on architecture (CNN, RNN, Transformer, GAN) and batch size.
  • Problem Diagnosis: Identify and fix issues like BatchNorm failure with small batches or LayerNorm misuse in CNNs.
  • Use Case: Your deep Transformer model is unstable and won't converge. This Skill guides you to LayerNorm (or RMSNorm) with pre-norm placement for stable training and faster convergence (see the sketch after this list).
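
As a hedged sketch of that pre-norm placement (the module name PreNormBlock and the sizes are illustrative, not part of the Skill), each sublayer normalizes its input first and then adds the residual:

```python
import torch
import torch.nn as nn

class PreNormBlock(nn.Module):
    """Transformer block with pre-norm placement. Swap nn.LayerNorm for
    nn.RMSNorm (PyTorch >= 2.4) to get the RMSNorm variant."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)               # normalize *before* the sublayer...
        x = x + self.attn(h, h, h)[0]   # ...then add the residual
        x = x + self.ff(self.norm2(x))  # same pattern for the feed-forward
        return x

out = PreNormBlock()(torch.randn(2, 16, 512))  # (batch, seq_len, d_model)
```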

Quick Start

My CNN is failing to train with a batch size of 4. What normalization technique should I use?
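
The usual answer for CNNs at very small batch sizes is GroupNorm, whose statistics are computed per sample and so do not degrade as the batch shrinks. A minimal sketch of the swap (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

# BatchNorm estimates mean/variance across the batch; at batch size 4
# those estimates are noisy and training often destabilizes.
unstable = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(),
)

# GroupNorm normalizes over channel groups within each sample, so it is
# independent of batch size.
stable = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.GroupNorm(num_groups=8, num_channels=64),  # 8 groups of 8 channels
    nn.ReLU(),
)

x = torch.randn(4, 3, 224, 224)  # batch size of 4
print(stable(x).shape)           # torch.Size([4, 64, 224, 224])
```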

Dependency Matrix

Required Modules

torch

Components

Standard package

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: normalization-techniques
Download link: https://github.com/tachyon-beep/skillpacks/archive/main.zip#normalization-techniques

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.