ai-llm-ops-inference
Community
Optimize LLM inference for speed and cost efficiency.
Author: vasilyu1983
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Running LLM inference in production is resource-intensive and costly. This Skill provides operational patterns for LLM serving that raise throughput, reduce latency, and cut costs.
Core Features & Use Cases
- High-Throughput Serving: Leverage vLLM with continuous batching and PagedAttention for up to 24x higher throughput than naive request-at-a-time serving.
- Cost Reduction: Apply FP8/FP4 quantization for roughly 30-50% cost savings with minimal accuracy loss.
- Advanced Optimization: Use FlashInfer kernels, speculative decoding, and KV cache optimization to further improve latency and memory efficiency (see the configuration sketch after this list).
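To make the features above concrete, here is a minimal sketch of an offline vLLM engine tuned for throughput. The model name and parameter values are illustrative assumptions, and engine arguments vary between vLLM releases; continuous batching and PagedAttention are vLLM defaults, while quantization and prefix caching are opt-in.

```python
# Minimal sketch, not part of the Skill itself: a vLLM engine configured for
# high throughput. Model name and values are illustrative assumptions; check
# your vLLM version's docs, as engine arguments change between releases.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    gpu_memory_utilization=0.90,   # reserve most of VRAM for weights + KV cache
    max_num_seqs=256,              # cap on concurrently batched sequences
    quantization="fp8",            # weight quantization; needs a supporting GPU
    enable_prefix_caching=True,    # reuse KV cache for shared prompt prefixes
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarize PagedAttention in one sentence."], params)
print(outputs[0].outputs[0].text)
```

Continuous batching and PagedAttention need no flags here: they are vLLM's default scheduler and memory manager. Speculative decoding is also configurable in vLLM, but its API differs across versions, so it is omitted from this sketch.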
Quick Start
Use the ai-llm-ops-inference skill to configure vLLM for a high-throughput LLM API, focusing on continuous batching and PagedAttention.
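For serving over HTTP rather than offline, vLLM exposes an OpenAI-compatible API. The sketch below assumes a server was started separately (for example with `vllm serve <model>`) and listens on localhost:8000; the host, port, and model name are assumptions for illustration.

```python
# Sketch of querying a locally running vLLM OpenAI-compatible server with the
# standard `openai` Python client. Adjust base_url and model to your setup;
# vLLM ignores the API key, so any placeholder string works.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
    messages=[{"role": "user", "content": "Explain continuous batching briefly."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```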
Dependency Matrix
Required Modules
None required
Components
references, assets
💻 Claude Code Installation
Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: ai-llm-ops-inference
Download link: https://github.com/vasilyu1983/AI-Agents-public/archive/main.zip#ai-llm-ops-inference
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.