vllm-omni-quantization
Reduce VRAM usage and speed up vLLM-Omni.
Author: hsliuustc0106
Version: 1.0.0
Category: Community
Installs: 0
System Documentation
What problem does it solve?
Quantization reduces model memory footprint and increases inference throughput for vLLM-Omni, enabling efficient deployment on GPUs with limited VRAM.
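As a rough back-of-the-envelope check, weight memory scales linearly with bits per parameter, so 4-bit AWQ/GPTQ weights need roughly a quarter of the VRAM of FP16 weights. The sketch below is illustrative arithmetic only; it ignores KV cache, activations, and quantization overhead such as scales and zero points:

```python
def weight_memory_gib(num_params: float, bits_per_param: float) -> float:
    """Approximate VRAM for model weights alone (no KV cache or activations)."""
    return num_params * bits_per_param / 8 / 1024**3

# A hypothetical 7B-parameter model at common precisions:
for name, bits in [("FP16", 16), ("FP8", 8), ("AWQ/GPTQ 4-bit", 4)]:
    print(f"{name:>15}: {weight_memory_gib(7e9, bits):.1f} GiB")
```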
Core Features & Use Cases
- Supports AWQ, GPTQ, and FP8 weight quantization to save memory and speed up autoregressive decoding.
- Guidance for serving pre-quantized models and selecting the right quantization mode for your hardware; see the serving sketch after this list.
- Real-world usage includes fitting larger Omni models on fewer GPUs and lowering serving costs.
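Serving a pre-quantized checkpoint with vLLM's Python API looks like the sketch below. The model name is a placeholder for whatever AWQ checkpoint you actually deploy, and the assumption that vLLM-Omni exposes the same `LLM(..., quantization=...)` entry point as upstream vLLM is mine, not the skill's:

```python
from vllm import LLM, SamplingParams

# Placeholder checkpoint: substitute any AWQ-quantized model you serve.
llm = LLM(
    model="TheBloke/Mistral-7B-Instruct-v0.2-AWQ",
    quantization="awq",           # must match the checkpoint's method
    gpu_memory_utilization=0.90,  # leave headroom for other processes
)

outputs = llm.generate(
    ["Explain why 4-bit weights reduce VRAM usage."],
    SamplingParams(temperature=0.7, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```

Pre-quantized checkpoints usually record their quantization method in the model config, so vLLM can often detect it without the explicit `quantization` argument; passing it anyway fails fast on a mismatch.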
Quick Start
Quantize a base model with AWQ or GPTQ, then start the server with the matching --quantization flag, as in the sketch below.
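A minimal quantization sketch, assuming the third-party AutoAWQ package rather than anything bundled with this skill; the paths are hypothetical and the quant_config values are AutoAWQ's commonly used 4-bit defaults:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_path = "path/to/base-model"   # hypothetical source checkpoint
quant_path = "path/to/model-awq"   # where quantized weights are written

model = AutoAWQForCausalLM.from_pretrained(base_path)
tokenizer = AutoTokenizer.from_pretrained(base_path)

# 4-bit weights, group size 128: the usual AWQ configuration.
model.quantize(tokenizer, quant_config={
    "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM",
})
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

The resulting directory can then be served with, e.g., vllm serve path/to/model-awq --quantization awq (the CLI form in upstream vLLM; vLLM-Omni is assumed to accept the same flag).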
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: vllm-omni-quantization
Download link: https://github.com/hsliuustc0106/vllm-omni-skills/archive/main.zip#vllm-omni-quantization
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.