vllm-deployment
Official
Deploy vLLM with Docker/GPU for fast AI inference.
Author: stakpak
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Deploys vLLM models for high-performance inference across CPU, GPU, and cloud environments.
Core Features & Use Cases
- Docker CPU/GPU deployments to run stable LLM workloads
- Cloud VM provisioning with OpenAI-compatible API endpoints
- Hardware assessment guidance and model deployment workflow for scalable inference
Quick Start
Follow the Docker or cloud VM deployment steps to run vLLM and expose an OpenAI-compatible API.
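As a minimal sketch of the Docker GPU path, the commands below start the official vllm/vllm-openai image and query its OpenAI-compatible endpoint. The model name is an example; substitute any model you have access to, and adjust the port and cache mount for your environment.

```shell
# Start a vLLM server on GPU using the official image.
# Mounting the Hugging Face cache avoids re-downloading model weights.
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model mistralai/Mistral-7B-Instruct-v0.2

# Once the server is up, any OpenAI-compatible client can talk to it:
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistralai/Mistral-7B-Instruct-v0.2", "prompt": "Hello", "max_tokens": 16}'
```

For CPU-only deployments, vLLM publishes separate CPU builds; the GPU flags above do not apply there.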
Dependency Matrix
Required Modules
None required
Components
Standard package
💻 Claude Code Installation
Recommended: Let Claude install the skill automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill: Name: vllm-deployment Download link: https://github.com/stakpak/community-paks/archive/main.zip#vllm-deployment Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
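If you prefer a manual install, the steps below sketch one way to do it. The extracted directory name (`community-paks-main`) is an assumption based on GitHub's archive naming convention; verify it after unzipping.

```shell
# Hypothetical manual install of the vllm-deployment skill.
curl -L -o community-paks.zip \
  https://github.com/stakpak/community-paks/archive/main.zip
unzip community-paks.zip

# Copy the skill into the Claude skills directory
# (directory name inside the archive is assumed).
mkdir -p .claude/skills
cp -r community-paks-main/vllm-deployment .claude/skills/
```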
Agent Skills Search Helper
Install a tiny helper in your Agent to search and equip skills on demand from a library of 223,000+ vetted skills.