Search results for "local-llm"
Keep sensitive tasks local to prevent data leakage.
Run local LLMs via Ollama/vLLM for fast generation.
Consult local LLM for quick insights.
Route queries to local LLMs offline.
Inline LLM for Neovim
Find the best local LLMs for your hardware.
Private AI for Home Assistant
Easily run local LLMs with Ollama from Python.
Run local LLMs with Ollama.
Local LLM inference in Go
Local LLM inference & management
Manage local LLMs with ease.
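Several of the entries above center on driving a local Ollama server from Python. As a minimal sketch of what that looks like, the snippet below calls Ollama's documented REST endpoint (`POST /api/generate` on the default `http://localhost:11434`) using only the standard library; the model name `llama3` is just an example and must already be pulled locally.

```python
import json
import urllib.request

# Ollama's default local endpoint for non-streaming generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the Ollama REST API."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama daemon and return the completion text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama daemon with the model pulled, e.g. `ollama pull llama3`.
    print(generate("llama3", "Why keep inference local?"))
```

Because everything stays on `localhost`, prompts and completions never leave the machine, which is the data-leakage argument the first entries make.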