vllm-omni-serving

Category: Community

Launch production-ready vLLM-Omni servers.

Author: hsliuustc0106
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

Setting up and maintaining production-grade vLLM-Omni API servers can be complex, error-prone, and hard to scale without clear guidance.

Core Features & Use Cases

  • Centralized guidance to launch and configure an OpenAI-compatible vLLM-Omni server for production workloads.
  • Supports multi-GPU setups, stage-based pipelines, GPU memory budgeting, and load-balancer deployments.
  • Use cases include serving multiple models behind a reverse proxy, performing health checks, and tuning resource usage for throughput (a launch and health-check sketch follows this list).
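For illustration, a multi-GPU launch with an explicit GPU memory budget might look like the sketch below. The model name is a placeholder, and --tensor-parallel-size / --gpu-memory-utilization are standard vLLM serve options assumed here to carry over unchanged to vLLM-Omni:

    # Hypothetical multi-GPU launch; the model name is a placeholder.
    # --tensor-parallel-size and --gpu-memory-utilization are standard
    # vLLM flags, assumed to apply unchanged when --omni is set.
    vllm serve Qwen/Qwen2.5-Omni-7B \
        --omni \
        --tensor-parallel-size 2 \
        --gpu-memory-utilization 0.90 \
        --host 0.0.0.0 \
        --port 8000

    # Liveness probe against vLLM's standard /health endpoint (assumed
    # unchanged in vLLM-Omni), e.g. for a load balancer's health check:
    curl -f http://localhost:8000/health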

Quick Start

Start a production-ready vLLM-Omni server using the vllm serve command with the --omni flag and your chosen model.
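A minimal sketch of that command, assuming a placeholder model name and vLLM's default port 8000:

    # Serve an omni model behind the OpenAI-compatible API.
    # Replace the model name with the one you intend to serve.
    vllm serve Qwen/Qwen2.5-Omni-7B --omni

    # The server speaks the OpenAI-compatible protocol, so a standard
    # chat-completions request should work once it is up:
    curl http://localhost:8000/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{"model": "Qwen/Qwen2.5-Omni-7B",
             "messages": [{"role": "user", "content": "Hello"}]}'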

Dependency Matrix

Required Modules

None required

Components

  • scripts
  • references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: vllm-omni-serving
Download link: https://github.com/hsliuustc0106/vllm-omni-skills/archive/main.zip#vllm-omni-serving

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
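If you would rather install manually, the steps above translate to roughly the following sketch; the extracted directory name and the vllm-omni-serving/ subfolder are assumptions based on GitHub's archive naming and the #vllm-omni-serving fragment in the link:

    # Manual install sketch; paths are assumptions, not a verified layout.
    curl -L -o vllm-omni-skills.zip \
        https://github.com/hsliuustc0106/vllm-omni-skills/archive/main.zip
    unzip vllm-omni-skills.zip
    mkdir -p .claude/skills
    cp -r vllm-omni-skills-main/vllm-omni-serving .claude/skills/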
