vllm

Community

High-throughput LLM inference on Kubernetes

Author: tylertitsworth
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

vLLM provides high-throughput, memory-efficient LLM inference on GPUs, making it practical to deploy large models at scale through memory-management techniques such as PagedAttention.

Core Features & Use Cases

  • High-throughput inference with configurable tensor and pipeline parallelism, multi-GPU deployment, and efficient KV cache management (a configuration sketch follows this list).
  • OpenAI-compatible API support for chat, completions, and embeddings, with optional LoRA adapters, speculative decoding, and structured outputs.
  • Use cases include production model serving, experimentation, and benchmarking in cloud-native environments.
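
As a rough illustration of the parallelism options above, the sketch below uses vLLM's offline Python API; the model ID, GPU count, and memory fraction are placeholder assumptions, not values prescribed by this Skill.

  # Minimal offline-inference sketch; assumes a 2-GPU machine and a
  # placeholder model ID.
  from vllm import LLM, SamplingParams

  llm = LLM(
      model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model ID
      tensor_parallel_size=2,                    # shard weights across 2 GPUs
      gpu_memory_utilization=0.90,               # VRAM fraction for weights + KV cache
  )
  params = SamplingParams(temperature=0.7, max_tokens=128)
  outputs = llm.generate(["Explain PagedAttention in one sentence."], params)
  print(outputs[0].outputs[0].text)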

Quick Start

Start the server with vllm serve and your model ID to expose an HTTP API; any OpenAI-compatible client can then send requests (see the sketch below).
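
A minimal end-to-end sketch, assuming the server runs locally on vLLM's default port (8000), the openai Python package is installed, and the model ID below is a placeholder:

  # Start the server first, e.g.:
  #   vllm serve meta-llama/Llama-3.1-8B-Instruct
  # Then query it with any OpenAI-compatible client.
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
      api_key="EMPTY",                      # any value works unless --api-key is set
  )
  resp = client.chat.completions.create(
      model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
      messages=[{"role": "user", "content": "Say hello in one sentence."}],
  )
  print(resp.choices[0].message.content)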

Dependency Matrix

Required Modules

None required

Components

references

💻 Claude Code Installation

Recommended: let Claude install the Skill automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: vllm
Download link: https://github.com/tylertitsworth/skills/archive/main.zip#vllm

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
