flash-attention

Community

Fast, efficient attention backends for ML.

Author: tylertitsworth
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

Enables selecting and configuring high-performance attention backends (FlashAttention 2/3, SDPA, PagedAttention, Ring Attention) for ML workloads on modern GPUs, reducing memory footprint and increasing throughput.
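
For example, PyTorch's scaled_dot_product_attention already dispatches across several of these kernels. A minimal sketch of pinning dispatch to the FlashAttention backend, assuming PyTorch 2.3+ and an Ampere-or-newer GPU (the tensor shapes are illustrative):

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# Illustrative half-precision tensors: (batch, heads, seq_len, head_dim).
q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)

# Restrict SDPA dispatch to the FlashAttention kernel; PyTorch raises a
# RuntimeError if the dtype/head-dim/hardware combination is unsupported.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```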

Core Features & Use Cases

  • Backend landscape overview for different GPUs, dtypes, and head dimensions.
  • Guidance on selecting between FA2, FA3, SDPA, and memory-efficient options like PagedAttention and Ring Attention based on workload (training vs inference) and hardware.
  • Practical integration tips for PyTorch and Hugging Face Transformers to control the attention backend at runtime (see the sketch after this list).
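
As one hedged illustration of the Hugging Face side: recent transformers releases accept an attn_implementation argument that picks the backend at load time (the model id below is a placeholder; "flash_attention_2" additionally requires the flash-attn package to be installed):

```python
import torch
from transformers import AutoModelForCausalLM

# The model id is a placeholder; any architecture with FlashAttention 2
# support in transformers accepts the same argument.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",
    torch_dtype=torch.bfloat16,               # FA2 requires fp16 or bf16
    attn_implementation="flash_attention_2",  # alternatives: "sdpa", "eager"
).to("cuda")
```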

Quick Start

To begin, identify your GPU architecture and throughput requirements, then configure the appropriate attention backend in your training script or inference pipeline.
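
A minimal sketch of that first step, assuming PyTorch with CUDA. The thresholds follow FlashAttention's published hardware support (FA3 targets Hopper, FA2 targets Ampere and newer); the returned labels are illustrative, not API names:

```python
import torch

def pick_attention_backend() -> str:
    """Heuristic backend suggestion from the GPU's compute capability."""
    if not torch.cuda.is_available():
        return "sdpa-math"            # no CUDA: portable math fallback
    major, _ = torch.cuda.get_device_capability()
    if major >= 9:                    # Hopper (sm90) and newer
        return "flash-attention-3"
    if major >= 8:                    # Ampere (sm80) and newer
        return "flash-attention-2"
    return "sdpa-mem-efficient"       # older GPUs: memory-efficient SDPA

print(pick_attention_backend())
```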

Dependency Matrix

Required Modules

None required

Components

Standard package

💻 Claude Code Installation

Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: flash-attention
Download link: https://github.com/tylertitsworth/skills/archive/main.zip#flash-attention

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
