vllm-omni-hardware
Configure and optimize vLLM-Omni across backends.
Category: Community
Author: hsliuustc0106
Version: 1.0.0
System Documentation
What problem does it solve?
Configuring vLLM-Omni across multiple hardware backends is complex and error-prone: each backend has its own installation steps, device-visibility quirks, and tuning knobs, and misconfiguration is easy to miss without explicit validation.
Core Features & Use Cases
- Backend-agnostic setup guides for CUDA, ROCm, NPU, and XPU
- Device placement checks and performance tuning for reliable deployments
- Use cases include multi-backend inference services that need consistent device visibility across backends (see the sketch after this list)
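The sketch below shows what a device-visibility check can look like; it is illustrative, not part of the skill itself. It assumes a PyTorch environment, that ROCm devices surface through torch.cuda (as in standard ROCm builds of PyTorch), that Ascend NPUs are reached via the optional torch_npu package, and that XPU support exists only on Intel-enabled builds.

```python
# Minimal sketch: report which accelerator backends PyTorch can see.
import torch

def report_visible_devices() -> None:
    # CUDA and ROCm both surface through torch.cuda;
    # torch.version.hip is set only on ROCm builds.
    if torch.cuda.is_available():
        backend = "ROCm" if getattr(torch.version, "hip", None) else "CUDA"
        print(f"{backend}: {torch.cuda.device_count()} device(s)")
        for i in range(torch.cuda.device_count()):
            print(f"  [{i}] {torch.cuda.get_device_name(i)}")
    else:
        print("CUDA/ROCm: no devices visible")

    # Intel XPU (present only on XPU-enabled PyTorch builds).
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        print(f"XPU: {torch.xpu.device_count()} device(s)")
    else:
        print("XPU: not available")

    # Ascend NPU via the optional torch_npu extension, which
    # attaches a torch.npu namespace on import.
    try:
        import torch_npu  # noqa: F401
        if torch.npu.is_available():
            print(f"NPU: {torch.npu.device_count()} device(s)")
        else:
            print("NPU: torch_npu installed, no devices visible")
    except ImportError:
        print("NPU: torch_npu not installed")

if __name__ == "__main__":
    report_visible_devices()
```

Running this on each node before deployment catches masked devices (for example, a restrictive CUDA_VISIBLE_DEVICES setting) earlier than a failed inference call would.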
Quick Start
Install the vLLM-Omni build for the hardware backend you plan to use (CUDA, ROCm, NPU, or XPU), then validate device visibility and basic performance before serving traffic; a rough timing sketch follows.
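As a first performance sanity check, timing a large matrix multiply gives a throughput number to compare against the device's expected range. This is a hedged sketch for CUDA/ROCm only (both synchronize through torch.cuda); the function name, matrix size, and iteration count are illustrative choices, not part of vLLM-Omni.

```python
import time
import torch

def matmul_tflops(device: str = "cuda", size: int = 4096, iters: int = 10) -> float:
    """Rough throughput check for CUDA/ROCm (both use torch.cuda)."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    for _ in range(3):            # warm-up: exclude one-time setup costs
        torch.matmul(a, b)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    torch.cuda.synchronize()      # kernels run async; wait before stopping the clock
    elapsed = time.perf_counter() - start
    return 2 * size**3 * iters / elapsed / 1e12   # 2·n³ FLOPs per matmul

if __name__ == "__main__":
    print(f"~{matmul_tflops():.1f} TFLOP/s")
```

A result far below the device's published peak usually points at a misconfigured driver, thermal throttling, or the wrong device being selected.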
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: vllm-omni-hardware
Download link: https://github.com/hsliuustc0106/vllm-omni-skills/archive/main.zip#vllm-omni-hardware
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
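For reference, a manual install could look like the sketch below. The in-archive layout (a vllm-omni-skills-main/ folder with a vllm-omni-hardware subdirectory, implied by the download link's fragment) and the project-local .claude/skills/ destination are assumptions.

```python
# Hedged sketch of the manual install the prompt above describes:
# download the archive, extract it, copy the skill into .claude/skills/.
import io
import shutil
import urllib.request
import zipfile
from pathlib import Path

ARCHIVE_URL = "https://github.com/hsliuustc0106/vllm-omni-skills/archive/main.zip"
SKILL_NAME = "vllm-omni-hardware"

def install_skill() -> None:
    data = urllib.request.urlopen(ARCHIVE_URL).read()
    tmp = Path("/tmp/vllm-omni-skills")
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        zf.extractall(tmp)
    # GitHub branch archives unpack to <repo>-<branch>/; the skill is
    # assumed to live in a subdirectory named after it.
    src = tmp / "vllm-omni-skills-main" / SKILL_NAME
    # Project-local install; use Path.home() / ".claude" for a user-level one.
    dest = Path(".claude") / "skills" / SKILL_NAME
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dest, dirs_exist_ok=True)
    print(f"Installed to {dest}")

if __name__ == "__main__":
    install_skill()
```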