ollama-model-workflow
Community · Master local LLMs with Ollama.
Author: michaelalber
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill streamlines the entire lifecycle of managing local Large Language Models (LLMs) using Ollama, ensuring efficient selection, configuration, and deployment based on hardware constraints and performance benchmarks.
Core Features & Use Cases
- Hardware-Aware Model Selection: Guides users to choose models that fit their specific hardware (VRAM, RAM).
- Modelfile Management: Facilitates the creation, versioning, and parameter tuning of custom Modelfiles.
- Performance Benchmarking: Provides a framework for rigorously testing and comparing model performance (tokens/sec, TTFT).
- Use Case: A developer needs to select the best local LLM for code generation on a workstation with 16GB of VRAM. This Skill helps them assess their hardware, choose an appropriate model (e.g., a 7B or 13B parameter model), configure its Modelfile for coding tasks, and benchmark its speed and quality against alternatives.
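As an illustration of the hardware-aware selection step, the usual rule of thumb (parameter count × bytes per weight, plus runtime overhead for the KV cache and buffers) can be sketched in Python. The `estimate_vram_gb` helper and its 1.2× overhead factor are illustrative assumptions, not part of the Skill itself.

```python
def estimate_vram_gb(params_billions: float, quant_bits: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight size (params x bits/8 bytes) times a
    fudge factor (assumed 1.2x) for KV cache and runtime buffers."""
    weights_gb = params_billions * quant_bits / 8
    return weights_gb * overhead

# A 13B model at 4-bit quantization (~7.8 GB) fits a 12 GB GPU;
# a 70B model at 4-bit (~42 GB) does not.
for params in (7, 13, 70):
    need = estimate_vram_gb(params, 4)
    verdict = "fits" if need <= 12 else "too big"
    print(f"{params}B @ Q4: ~{need:.1f} GB -> {verdict} in 12 GB")
```

Actual usage varies with context length and runtime, so treat the estimate as a first filter before testing candidates with `ollama run`.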
Quick Start
Use the ollama-model-workflow skill to select and benchmark a coding model for a system with 12GB of VRAM.
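For the benchmarking side of that workflow, Ollama's `/api/generate` endpoint reports nanosecond timing fields (`eval_count`, `eval_duration`, `prompt_eval_duration`, `load_duration`) in its final response, from which tokens/sec and an approximate TTFT can be derived. The sketch below computes the metrics from a sample response dict so it runs without a live server; in practice the dict would come from a `requests.post` call to a running Ollama instance. The `benchmark_metrics` helper and the sample numbers are illustrative assumptions.

```python
def benchmark_metrics(resp: dict) -> dict:
    """Derive throughput and time-to-first-token from the timing
    fields of Ollama's /api/generate response (durations are in ns)."""
    tokens_per_sec = resp["eval_count"] / (resp["eval_duration"] / 1e9)
    # TTFT approximated as model load time plus prompt processing time.
    ttft_sec = (resp.get("load_duration", 0) + resp["prompt_eval_duration"]) / 1e9
    return {"tokens_per_sec": round(tokens_per_sec, 1),
            "ttft_sec": round(ttft_sec, 2)}

# Hypothetical final-response fields; a live run would POST to
# http://localhost:11434/api/generate with "stream": false.
sample = {
    "eval_count": 200,                  # tokens generated
    "eval_duration": 4_000_000_000,     # 4 s generating
    "prompt_eval_duration": 500_000_000,
    "load_duration": 1_000_000_000,
}
print(benchmark_metrics(sample))  # {'tokens_per_sec': 50.0, 'ttft_sec': 1.5}
```

Running the same prompt against each candidate model and comparing these two numbers gives the tokens/sec and TTFT comparison the Skill's benchmarking framework describes.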
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: Let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill: Name: ollama-model-workflow Download link: https://github.com/michaelalber/ai-toolkit/archive/main.zip#ollama-model-workflow Please download this .zip file, extract it, and install it in the .claude/skills/ directory.