ollama-rag
Community
Build RAG systems with Ollama.
Author: cuba6112
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill simplifies the process of building Retrieval Augmented Generation (RAG) systems by leveraging Ollama for both local and cloud-based Large Language Models (LLMs) and embedding models.
Core Features & Use Cases
- Local & Cloud LLMs: Use models such as DeepSeek-V3.2 (GPT-5-level) or Qwen3-Coder (1M-token context) through Ollama, whether they run on your own hardware or in the cloud.
- RAG Frameworks: Integrates seamlessly with LangChain and LlamaIndex for document Q&A, knowledge bases, and agentic RAG.
- Embedding Models: Supports various embedding models for accurate document retrieval.
- Example Use Case: Quickly set up a RAG system that answers questions over a large codebase or a collection of technical documents using a local Ollama model.
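The retrieval half of the RAG setups listed above can be sketched in plain Python. The bag-of-words "embedding" below is only a stand-in for a real Ollama embedding model (e.g. `nomic-embed-text`); the point is the cosine-similarity top-k lookup that every RAG pipeline performs before generation.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": lowercase word counts. A real RAG system
    # would call an Ollama embedding model (e.g. nomic-embed-text) here.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k --
    # the retrieval step of Retrieval Augmented Generation.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Ollama serves local LLMs over an HTTP API.",
    "LangChain provides chains and retrievers for RAG.",
    "Bananas are rich in potassium.",
]
print(retrieve("How does Ollama expose local LLMs?", docs, k=1))
# → ['Ollama serves local LLMs over an HTTP API.']
```

In a real deployment the frameworks mentioned above (LangChain, LlamaIndex) replace `embed` and `retrieve` with a vector store backed by an Ollama embedding model.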
Quick Start
Use the ollama-rag skill to build a RAG system using LangChain and a local 'nemotron-3-nano' model.
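As a sketch of the generation step such a system would perform, the snippet below builds the JSON request that Ollama's local REST endpoint (`POST http://localhost:11434/api/generate`) accepts, folding retrieved context into the prompt. The model name `nemotron-3-nano` is taken from the prompt above, and the prompt wording is an assumption for illustration; the actual HTTP call is left commented so the sketch runs without a live Ollama server.

```python
import json
import urllib.request

def build_rag_request(question: str, context_chunks: list[str],
                      model: str = "nemotron-3-nano") -> urllib.request.Request:
    # Fold retrieved chunks into a grounded prompt, then wrap it in the
    # JSON body Ollama's /api/generate endpoint expects.
    context = "\n\n".join(context_chunks)
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_rag_request(
    "What port does Ollama listen on?",
    ["Ollama serves its HTTP API on port 11434 by default."],
)
print(json.loads(req.data)["model"])  # → nemotron-3-nano

# With an Ollama server running, send it with:
# resp = urllib.request.urlopen(req)
# print(json.loads(resp.read())["response"])
```

Setting `"stream": False` makes Ollama return one complete JSON response instead of a stream of partial tokens, which keeps the client code simple.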
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill: Name: ollama-rag Download link: https://github.com/cuba6112/skillfactory/archive/main.zip#ollama-rag Please download this .zip file, extract it, and install it in the .claude/skills/ directory.