Qwen-Ollama


Run local LLMs with Ollama and Qwen models.

Author: lawless-m
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill enables local LLM inference with Qwen models via Ollama, supporting analysis, summarization, and code generation without any cloud dependency.

Core Features & Use Cases

  • Local model inference via Ollama's HTTP API (see the client sketch after this list)
  • Simple JSON-based request/response patterns
  • System prompts for consistent behavior
  • Timeout management for long-running tasks
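
A minimal client sketch in Python illustrating all four features. It assumes Ollama's default endpoint at http://localhost:11434 and the qwen2.5:7b model used in the Quick Start; the ask() helper and its default system prompt are illustrative, not part of the Skill itself.

    import requests  # assumes the requests package is installed

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def ask(prompt, system="You are a concise assistant.", timeout=120):
        """Send one generate request to a local Ollama server.

        `system` sets the system prompt for consistent behavior;
        `timeout` (seconds) guards against long-running generations.
        """
        payload = {
            "model": "qwen2.5:7b",
            "prompt": prompt,
            "system": system,
            "stream": False,  # return a single JSON object instead of a stream
        }
        resp = requests.post(OLLAMA_URL, json=payload, timeout=timeout)
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        print(ask("Summarize why local inference avoids cloud dependency."))

With stream set to False, Ollama returns the full completion in one JSON object whose "response" field holds the generated text; streaming mode would instead return one JSON line per token chunk.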

Quick Start

Install Ollama, pull the qwen2.5:7b model, verify it appears in ollama list, then use the client pattern shown below to generate text.
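
A hedged end-to-end sketch of these steps, with the CLI commands shown as comments. It uses Ollama's chat endpoint rather than the generate endpoint above; the prompt text and timeout value are placeholders.

    # Quick-start sketch; assumes Ollama is installed and running locally.
    #   ollama pull qwen2.5:7b     # download the model
    #   ollama list                # verify it is available
    import requests

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "qwen2.5:7b",
            "messages": [
                {"role": "system", "content": "Answer in one sentence."},
                {"role": "user", "content": "What is Ollama?"},
            ],
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["message"]["content"])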

Dependency Matrix

Required Modules

None required

Components

Standard package

💻 Claude Code Installation

Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: Qwen-Ollama
Download link: https://github.com/lawless-m/Gwen/archive/main.zip#qwen-ollama

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.