uv-tensorrt-llm
Accelerate LLM inference on NVIDIA GPUs

Category: Community
Domain: Software Engineering
Tags: performance optimization, quantization, serving, llm inference, tensorrt-llm, nvidia gpu
Author: uv-xiao
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill optimizes Large Language Model (LLM) inference for production environments on NVIDIA GPUs, substantially reducing latency and increasing throughput compared to running the same model in a general-purpose framework such as vanilla PyTorch.
Core Features & Use Cases
- High-Performance Inference: 10-100x faster LLM inference on NVIDIA hardware relative to unoptimized baselines.
- Production Deployment: Ideal for serving models with features such as quantization (FP8/INT4), in-flight batching, and multi-GPU scaling; see the sketch after this list.
- Use Case: Deploying a Llama 3-70B model for a real-time customer support chatbot that must hold sub-100ms response times even under heavy load.
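To make the deployment features concrete, here is a minimal sketch using TensorRT-LLM's high-level Python LLM API to load a model with FP8 quantization. The names used (LLM, SamplingParams, QuantConfig, QuantAlgo) reflect recent TensorRT-LLM releases but are assumptions here and may differ by version; FP8 also requires Hopper-class or newer GPUs.

```python
# Minimal sketch, not the skill's actual implementation.
# Assumes TensorRT-LLM's high-level LLM API; QuantConfig/QuantAlgo
# import paths are version-dependent assumptions.
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import QuantConfig, QuantAlgo

llm = LLM(
    model="meta-llama/Meta-Llama-3-70B-Instruct",  # model from the use case above
    quant_config=QuantConfig(quant_algo=QuantAlgo.FP8),  # FP8 quantization
)

# In-flight batching is handled by the runtime; callers just submit requests.
outputs = llm.generate(
    ["How do I reset my password?"],
    SamplingParams(max_tokens=128, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```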
Quick Start
Example prompt: "Use the uv-tensorrt-llm skill to serve the meta-llama/Meta-Llama-3-8B model with tensor parallelism across 4 GPUs."
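Under the hood, the skill would drive something equivalent to the sketch below; tensor_parallel_size is the assumed LLM API parameter name for sharding across GPUs (recent releases expose the same setting through the trtllm-serve CLI):

```python
# Sketch of the quick-start scenario: Llama-3-8B sharded across 4 GPUs.
# Parameter names are assumptions based on recent TensorRT-LLM releases.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B",
    tensor_parallel_size=4,  # split weights across 4 GPUs
)

outputs = llm.generate(
    ["Summarize TensorRT-LLM in one sentence."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```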
Dependency Matrix
- Required Modules: None required
- Components: references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: uv-tensorrt-llm
Download link: https://github.com/uv-xiao/pkbllm/archive/main.zip#uv-tensorrt-llm
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
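If you prefer to install by hand instead of via Claude, here is a rough Python sketch of the same steps; the inner folder name pkbllm-main follows GitHub's archive naming convention for main.zip and is an assumption about this repo's layout:

```python
# Manual-install sketch: download the repo archive and copy the
# uv-tensorrt-llm skill into .claude/skills/. The extracted folder
# name "pkbllm-main" is assumed from GitHub's main.zip convention.
import io
import shutil
import tempfile
import urllib.request
import zipfile
from pathlib import Path

url = "https://github.com/uv-xiao/pkbllm/archive/main.zip"
dest = Path(".claude/skills/uv-tensorrt-llm")

with urllib.request.urlopen(url) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

with tempfile.TemporaryDirectory() as tmp:
    archive.extractall(tmp)
    src = Path(tmp) / "pkbllm-main" / "uv-tensorrt-llm"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dest, dirs_exist_ok=True)
```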