local-llm-provider

Community

Run LLMs locally, with cloud fallback.

Author: winsorllc
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill lets you run Large Language Models (LLMs) directly on your local machine, giving you stronger privacy, lower cost, and offline capability, while automatically falling back to a cloud provider if local inference fails.

Core Features & Use Cases

  • Local LLM Inference: Connect to Ollama, llama.cpp, or vLLM servers for private and cost-effective AI tasks.
  • Model Flexibility: Use a wide range of models, including those not available via cloud APIs.
  • Automatic Fallback: Seamlessly switches to cloud providers (like Anthropic or OpenAI) if local endpoints are unavailable or fail.
  • Use Case: You need to process sensitive customer data locally for privacy reasons, using a fine-tuned Llama 3 model. This Skill ensures the task completes even if your local Ollama server is temporarily down by falling back to a cloud-based Claude model (see the sketch after this list).
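
To make the fallback concrete, here is a minimal sketch of the pattern this Skill implements: query a local Ollama server first and switch to a cloud model only if the local call fails. The endpoint, model names, and the use of the anthropic package are illustrative assumptions, not the Skill's actual code.

```python
import json
import urllib.error
import urllib.request

# Assumed defaults for illustration -- adjust to your own setup.
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint
LOCAL_MODEL = "llama3"

def query_local(prompt: str, timeout: float = 10.0) -> str:
    """Send a prompt to a local Ollama server and return the response text."""
    payload = json.dumps({"model": LOCAL_MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]

def query_with_fallback(prompt: str) -> str:
    """Try the local server first; fall back to a cloud model if it is unreachable."""
    try:
        return query_local(prompt)
    except (urllib.error.URLError, OSError):
        # Fallback path: call a cloud provider instead (requires the `anthropic`
        # package and an API key; shown here purely as an illustration).
        import anthropic
        client = anthropic.Anthropic()
        msg = client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

if __name__ == "__main__":
    print(query_with_fallback("What is the capital of France?"))
```

Keeping the local call on the standard library makes the sketch dependency-free, and importing the cloud client lazily means the happy path never needs cloud credentials.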

Quick Start

Use the local-llm-provider skill to query a local model with the prompt "What is the capital of France?".
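
Before invoking the Skill, it can help to confirm that a local server is actually reachable. A quick check against Ollama's default port (an assumption; adjust if your server is configured differently) lists the models it has pulled:

```python
import json
import urllib.request

# List the models available on the local Ollama server (default port assumed).
with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    models = [m["name"] for m in json.loads(resp.read())["models"]]
print("Locally available models:", models)
```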

Dependency Matrix

Required Modules

None required

Components

scripts, references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: local-llm-provider
Download link: https://github.com/winsorllc/upgraded-carnival/archive/main.zip#local-llm-provider

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
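
If you prefer to install manually, the steps are the ones described in the prompt above: download the archive, extract it, and copy the skill folder into .claude/skills/. A rough Python sketch, assuming the skill lives in a local-llm-provider/ directory at the repository root:

```python
import io
import shutil
import tempfile
import urllib.request
import zipfile
from pathlib import Path

# Manual alternative to the copy-paste prompt above. Assumes the skill sits in a
# `local-llm-provider/` folder at the root of the repository archive.
ZIP_URL = "https://github.com/winsorllc/upgraded-carnival/archive/main.zip"
dest = Path(".claude/skills/local-llm-provider")

with urllib.request.urlopen(ZIP_URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

with tempfile.TemporaryDirectory() as tmp:
    archive.extractall(tmp)
    # GitHub archives of the main branch unpack to "<repo>-main/".
    src = Path(tmp) / "upgraded-carnival-main" / "local-llm-provider"
    shutil.copytree(src, dest, dirs_exist_ok=True)
print(f"Installed to {dest}")
```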
