vllm-server

Community

High-throughput LLM inference

Author: BagelHole
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill deploys and manages vLLM for high-throughput LLM inference, optimizing serving performance for production environments.

Core Features & Use Cases

  • Deploy LLMs: Serve open-source LLMs like Llama, Mistral, and Gemma.
  • OpenAI-Compatible API: Expose self-hosted models via a familiar API endpoint.
  • Performance Optimization: Configure continuous batching, tensor parallelism, and quantization for reduced latency and increased throughput.
  • Use Case: Deploy a Llama-3.1-70B-Instruct model with tensor parallelism across two GPUs, serving requests through an OpenAI-compatible API for a customer-facing application (see the sketch after this list).
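
The sketch below shows how these performance knobs map onto vLLM's offline Python API; the model name, `tensor_parallel_size`, and sampling settings are illustrative, and the OpenAI-compatible server exposes equivalent CLI flags (e.g. `--tensor-parallel-size`).

```python
# Illustrative sketch: shard Llama-3.1-70B-Instruct across two GPUs with vLLM's
# offline Python API. The server entrypoint accepts the same settings as flags.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",
    tensor_parallel_size=2,        # shard model weights across two GPUs
    gpu_memory_utilization=0.90,   # fraction of GPU memory for weights + KV cache
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)

# Continuous batching is handled by the engine; just submit prompts.
outputs = llm.generate(["Summarize vLLM in one sentence."], sampling)
print(outputs[0].outputs[0].text)
```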

Quick Start

Serve the meta-llama/Llama-3.1-8B-Instruct model using vLLM with an OpenAI-compatible API on port 8000, as sketched below.
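
A minimal sketch of this quick start, assuming a recent vLLM release that provides the `vllm serve` entrypoint; the prompt and settings are illustrative, and any OpenAI-compatible client can be pointed at the local endpoint.

```python
# Start the server first (assumes a recent vLLM release; older versions use
# `python -m vllm.entrypoints.openai.api_server`):
#   vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
#
# Then query it with any OpenAI-compatible client:
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key by default

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Give me one sentence about vLLM."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```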

Dependency Matrix

Required Modules

None required

Components

scripts, references

💻 Claude Code Installation

Recommended: let Claude install automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: vllm-server
Download link: https://github.com/BagelHole/DevOps-Security-Agent-Skills/archive/main.zip#vllm-server

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
