ruvector-ruvllm-wasm
Browser LLM inference with WebGPU
Author: ricable
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill enables running powerful Large Language Models (LLMs) directly within a web browser, even on edge devices, without needing a server.
Core Features & Use Cases
- Client-Side LLM Inference: Perform text generation, embedding, and streaming completions directly in the browser.
- WebGPU Acceleration: Leverages WebGPU for fast, hardware-accelerated inference.
- Quantized Models: Supports loading smaller, quantized models for efficient use of memory and bandwidth.
- Use Case: Build offline-capable AI chat applications, add text generation features to web apps, or deploy language models to resource-constrained edge devices.
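Taken together, these features imply a load-time decision: use WebGPU when the browser exposes it, and fall back to plain WASM execution when it does not. A minimal sketch of that detection logic follows; the function name and backend labels are illustrative, not part of this skill's documented API.

```javascript
// Sketch: pick an inference backend before loading a model.
// Browsers expose WebGPU via navigator.gpu; when it is absent,
// a client-side LLM runtime typically falls back to CPU/WASM.
function pickBackend(nav) {
  return nav && "gpu" in nav ? "webgpu" : "wasm";
}

// In a real page you would call: pickBackend(navigator)
console.log(pickBackend({ gpu: {} })); // "webgpu"
console.log(pickBackend({}));          // "wasm"
```

Checking for `navigator.gpu` before model load lets the app choose a smaller quantized model, or disable generation entirely, on devices without GPU acceleration.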
Quick Start
Use the ruvector-ruvllm-wasm skill to generate text using the 'tinyllama-1.1b-q4' model in the browser.
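In browser code, streaming completions typically arrive as an async iterator of tokens. The sketch below mocks the model call, since this skill's actual generation API is not documented here; only the accumulation pattern is the point.

```javascript
// Streaming-completion pattern: accumulate tokens as they arrive.
// mockStream stands in for the skill's real WebGPU-backed generator
// (its name and shape are assumptions, not documented here).
async function* mockStream(tokens) {
  for (const t of tokens) yield t;
}

async function collectCompletion(stream, onToken) {
  let text = "";
  for await (const token of stream) {
    text += token;               // append each streamed token
    if (onToken) onToken(token); // e.g. update the chat UI incrementally
  }
  return text;
}

collectCompletion(mockStream(["Hello", ", ", "world"])).then((text) =>
  console.log(text) // "Hello, world"
);
```

The `onToken` callback is where a chat UI would render partial output, which is what makes client-side inference feel responsive even on slower edge devices.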
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: ruvector-ruvllm-wasm
Download link: https://github.com/ricable/cli-skills-builder/archive/main.zip#ruvector-ruvllm-wasm
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.