unsloth-inference
Community · Accelerate LLM inference and serving.
Author: cuba6112
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill streamlines the deployment of fine-tuned large language models for production inference, speeding up generation (up to 2x locally) and reducing VRAM usage.
Core Features & Use Cases
- Native Optimized Inference: Achieve 2x faster local inference using FastLanguageModel.for_inference().
- Production Serving: Merge LoRA weights for deployment with high-throughput engines like vLLM or SGLang (see the sketch after this list).
- OpenAI-Compatible API: Easily serve models locally for drop-in replacement in existing applications.
- Use Case: Deploy a fine-tuned LLM for a customer support chatbot that needs to respond quickly and handle a high volume of user queries.
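A minimal sketch of the production-serving path, assuming a fine-tuned Unsloth checkpoint at the hypothetical path ./lora_model and a local vLLM install; exact keyword arguments and CLI flags may vary by version.

```python
from unsloth import FastLanguageModel

# Reload the fine-tuned checkpoint ("./lora_model" is a placeholder path).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="./lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Write a standalone 16-bit merged checkpoint that vLLM or SGLang can load directly.
model.save_pretrained_merged("merged_model", tokenizer, save_method="merged_16bit")

# Then start an OpenAI-compatible server from a shell:
#   vllm serve merged_model --port 8000
```

Once the server is up, existing OpenAI-client code only needs its base URL changed:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="merged_model",
    messages=[{"role": "user", "content": "Where is my order?"}],
)
print(resp.choices[0].message.content)
```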
Quick Start
Load the fine-tuned model and run local optimized inference using the provided script.
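A minimal sketch of the local fast-inference path; the checkpoint path ./lora_model and the prompt are placeholders, and keyword arguments may differ across Unsloth versions.

```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint ("./lora_model" is a placeholder path).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="./lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Enable Unsloth's native optimized inference mode (roughly 2x faster generation).
FastLanguageModel.for_inference(model)

inputs = tokenizer("Summarize Unsloth in one sentence.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```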
Dependency Matrix
Required Modules
unsloth, torch, vllm, sglang
Components
scripts, references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: unsloth-inference
Download link: https://github.com/cuba6112/skillfactory/archive/main.zip#unsloth-inference
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.