unsloth-gguf

Community

Export models to GGUF for local deployment.

Author: cuba6112
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill simplifies exporting fine-tuned models (typically models trained with Unsloth) to the GGUF format, making them compatible with popular local inference tools such as llama.cpp and Ollama.

Core Features & Use Cases

  • GGUF Export: Converts trained models to the efficient GGUF format.
  • Quantization: Supports various quantization methods (e.g., q4_k_m, q8_0) to reduce model size and VRAM usage.
  • LoRA Merging: Automatically merges LoRA adapters into the base model during export.
  • Use Case: Deploy a fine-tuned LLM on your local machine for faster inference, or run it on hardware with limited VRAM (see the sketch below).
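
The sketch below illustrates these features using Unsloth's GGUF export helper. It is a minimal example, not the Skill's exact implementation: the checkpoint path, output directory, and sequence length are placeholders, and it assumes the model was fine-tuned and loaded through Unsloth's FastLanguageModel.

```python
# Minimal sketch: export a LoRA-fine-tuned Unsloth model to GGUF.
# Paths and names are placeholders; adjust for your own checkpoint.
from unsloth import FastLanguageModel

# Load the fine-tuned model (LoRA adapters attached) as it was trained.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="outputs/lora_checkpoint",  # placeholder: your fine-tuned checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# save_pretrained_gguf merges the LoRA adapters into the base weights and
# writes a quantized GGUF file; q4_k_m trades a little quality for much less VRAM.
model.save_pretrained_gguf(
    "gguf_export",                    # output directory (placeholder)
    tokenizer,
    quantization_method="q4_k_m",     # other common options include q8_0 and f16
)
```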

Quick Start

Export the current model to GGUF format using the 'q4_k_m' quantization method.
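
As a rough sanity check, the exported file can be loaded with the llama-cpp-python bindings (one possible local runtime; anything llama.cpp- or Ollama-compatible works). The .gguf filename below is a placeholder for whatever the export step actually produced.

```python
# Minimal sketch: load the exported GGUF locally and run a short prompt.
# The model path is a placeholder; use the .gguf written by the export step.
from llama_cpp import Llama

llm = Llama(
    model_path="gguf_export/unsloth.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,
)

out = llm("Summarize GGUF in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```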

Dependency Matrix

Required Modules

None required

Components

  • scripts
  • references

💻 Claude Code Installation

Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: unsloth-gguf
Download link: https://github.com/cuba6112/skillfactory/archive/main.zip#unsloth-gguf

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
