yzma

Community

Local LLM inference in Go

Author: czyt
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill simplifies integrating local Large Language Models (LLMs) into Go applications by providing clear guidance on using the yzma library, a Go wrapper around llama.cpp.

Core Features & Use Cases

  • Local LLM Integration: Run LLMs directly on your machine without external servers.
  • Hardware Acceleration: Leverage CUDA, Metal, Vulkan, etc., for faster inference.
  • Model Management: Load and configure GGUF models, handle context, and tune parameters.
  • Use Case: Develop a Go application that uses a local LLM to summarize user-provided text, powered by yzma and a downloaded GGUF model.
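The summarization use case above can be sketched as a small Go program. yzma's actual API is not reproduced here; the `Model` interface and `stubModel` below are hypothetical stand-ins that show only the flow around a locally loaded GGUF model, so the example runs without a model file.

```go
package main

import "fmt"

// Model is a hypothetical stand-in for a locally loaded LLM;
// yzma's real types and method names will differ.
type Model interface {
	Predict(prompt string) (string, error)
}

// stubModel fakes inference so the flow is runnable without a GGUF file.
// A real implementation would hold a model loaded via yzma/llama.cpp.
type stubModel struct{}

func (stubModel) Predict(prompt string) (string, error) {
	// A real backend would run llama.cpp inference on the prompt here.
	return "stub summary of the provided text", nil
}

// summarize wraps user-provided text in a summarization prompt
// and runs it through the model.
func summarize(m Model, text string) (string, error) {
	prompt := "Summarize the following text:\n\n" + text
	return m.Predict(prompt)
}

func main() {
	out, err := summarize(stubModel{}, "yzma provides local LLM inference in Go via llama.cpp.")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Swapping `stubModel` for a real yzma-backed model keeps `summarize` unchanged, which is the point of coding against the interface.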

Quick Start

Use the yzma skill to install the yzma CLI tool and download the llama.cpp libraries for CUDA acceleration.

Dependency Matrix

Required Modules

None required

Components

references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: yzma
Download link: https://github.com/czyt/claude-skills/archive/main.zip#yzma

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
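If you prefer a manual install, the steps the prompt describes can be sketched in the shell. The `claude-skills-main/yzma` path inside the archive is an assumption based on GitHub's usual zip layout, so verify it after extracting:

```shell
# Download the skills archive, extract it, and copy the yzma skill
# into the .claude/skills/ directory (inner archive path is assumed).
mkdir -p ~/.claude/skills
curl -L -o /tmp/claude-skills.zip \
  https://github.com/czyt/claude-skills/archive/main.zip
unzip -q /tmp/claude-skills.zip -d /tmp/claude-skills
cp -r /tmp/claude-skills/claude-skills-main/yzma ~/.claude/skills/yzma
```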
