vllm-installer
Community · Set up vLLM on NVIDIA GPUs with ease.
Author: yangwhale
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This skill provides a turnkey guide to install, configure, and diagnose vLLM on NVIDIA GPUs, removing guesswork and reducing setup time.
Core Features & Use Cases
- End-to-end installation: CUDA, PyTorch, vLLM, FlashInfer, and KV transfer components.
- Environment validation and debugging: LSSD mount checks, LD_LIBRARY_PATH setup, and DeepEP readiness for MoE models (a quick manual sanity check is sketched after this list).
- Server deployment and testing: Launch an OpenAI-compatible vLLM API server and verify model endpoints.
- Troubleshooting and maintenance: Diagnose common issues and re-run the diagnostic workflow.
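The environment validation step can also be spot-checked by hand before launching anything. The commands below are a minimal sketch of such a check, not part of the skill's own scripts; they assume the NVIDIA driver, PyTorch, and vLLM are already installed.
nvidia-smi                      # confirm GPUs and driver are visible
echo "$LD_LIBRARY_PATH"         # confirm CUDA/NCCL library paths are exported
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
python -c "import vllm; print(vllm.__version__)"   # confirm vLLM imports cleanly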
Quick Start
source ./scripts/setup_env.sh
pip install vllm==0.14.1 flashinfer-python==0.5.3 flashinfer-cubin==0.5.3
pip install nvidia-nccl-cu12==2.28.3 nvidia-cudnn-cu12==9.16.0.29
vllm serve Qwen/Qwen2.5-7B-Instruct --tensor-parallel-size 4 --port 8000 --host 0.0.0.0
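Once the server reports it is ready, a quick request against the OpenAI-compatible API confirms the model endpoint is live. This is a generic check rather than part of the skill itself; the model name must match whatever was passed to vllm serve.
curl http://localhost:8000/v1/models
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-7B-Instruct", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 32}'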
Dependency Matrix
Required Modules: none required
Components: scripts, references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: vllm-installer
Download link: https://github.com/yangwhale/gpu-tpu-pedia/archive/main.zip#vllm-installer
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
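For a manual installation without Claude's help, the shell sketch below follows the same steps. The extracted folder layout (gpu-tpu-pedia-main and the location of the vllm-installer directory inside it) is an assumption inferred from the download link, so verify the path after unzipping.
curl -L -o gpu-tpu-pedia.zip https://github.com/yangwhale/gpu-tpu-pedia/archive/main.zip
unzip gpu-tpu-pedia.zip
mkdir -p ~/.claude/skills
# Skill folder location inside the archive is assumed; adjust the path if it differs.
cp -r gpu-tpu-pedia-main/vllm-installer ~/.claude/skills/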