uv-llama-cpp

Community

Run LLMs on any hardware, anywhere.

Author: uv-xiao
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill enables running large language models (LLMs) efficiently on a wide range of hardware, including CPUs, Apple Silicon, and non-NVIDIA GPUs, overcoming the limitations of traditional CUDA-dependent deployments.

Core Features & Use Cases

  • Cross-Platform Inference: Deploy LLMs on Macs, Linux, Windows, and edge devices without requiring NVIDIA hardware.
  • Optimized Performance: Leverages GGUF quantization for a reduced memory footprint and significant speedups (typically 4-10x faster than PyTorch for CPU inference); a minimal loading sketch follows this list.
  • Use Case: Deploy a chatbot on a local machine with an M3 Mac or an AMD GPU, or run an LLM on a Raspberry Pi for an embedded application, all without needing expensive NVIDIA hardware.
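To make the GGUF and offload points concrete, here is a minimal loading sketch that calls llama-cpp-python directly; the model filename, context size, and n_gpu_layers value are illustrative assumptions, not settings prescribed by this skill.

```python
from llama_cpp import Llama

# Load a quantized GGUF model. n_gpu_layers=-1 offloads every layer to whatever
# accelerator the installed build supports (Metal on Apple Silicon, CUDA/ROCm/
# Vulkan builds elsewhere); use n_gpu_layers=0 for pure CPU inference.
llm = Llama(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",  # assumed path to a downloaded model
    n_ctx=2048,        # context window
    n_gpu_layers=-1,   # offload all layers if an accelerator backend is available
    verbose=False,
)

# One-shot completion to confirm the model runs on this hardware.
result = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(result["choices"][0]["text"])
```

The same call runs unchanged on CPU-only machines; only the build backend and the layer-offload setting differ.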

Quick Start

Use the uv-llama-cpp skill to run interactive chat with the llama-2-7b-chat.Q4_K_M.gguf model.
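The skill wires this up for you; as a rough sketch of what the interactive chat amounts to, the loop below uses llama-cpp-python's chat API directly, assuming the GGUF file sits in the working directory.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",  # assumed local path
    n_ctx=2048,
    n_gpu_layers=-1,
    verbose=False,
)

messages = [{"role": "system", "content": "You are a helpful assistant."}]
while True:
    user = input("you> ").strip()
    if user.lower() in {"exit", "quit"}:
        break
    messages.append({"role": "user", "content": user})

    # Stream the reply token by token (OpenAI-style chat-completion chunks).
    reply = ""
    for chunk in llm.create_chat_completion(messages=messages, max_tokens=256, stream=True):
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            print(delta["content"], end="", flush=True)
            reply += delta["content"]
    print()
    messages.append({"role": "assistant", "content": reply})
```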

Dependency Matrix

Required Modules

llama-cpp-python
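If you need to install the module yourself, `pip install llama-cpp-python` (or `uv pip install llama-cpp-python`) typically builds a CPU-only package by default; accelerator backends are usually selected at build time through the CMAKE_ARGS environment variable (for example `CMAKE_ARGS="-DGGML_METAL=on"` on Apple Silicon), though the defaults and flag names vary by package version.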

Components

scripts, references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: uv-llama-cpp
Download link: https://github.com/uv-xiao/pkbllm/archive/main.zip#uv-llama-cpp

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
