mlx-brain (Community)
Run LLMs locally on macOS with MLX.
Author: kjaylee
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill lets users run large language models (LLMs) directly on their macOS devices using the MLX framework, removing the need for cloud-based services and keeping prompts and data under local control.
Core Features & Use Cases
- Local LLM Execution: Leverages Apple's MLX framework for efficient on-device AI model inference.
- Model Variety: Supports different models like Qwen2.5 for general tasks and Qwen2.5-Coder for coding assistance.
- Use Case: A developer can use this Skill to quickly test code generation prompts or get explanations for code snippets without sending sensitive information to external servers.
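The model split above (Qwen2.5 for general tasks, Qwen2.5-Coder for coding) can be sketched as a small helper that picks a model and builds the corresponding `mlx_lm.generate` command line. The `mlx-community/...` repository names and the exact CLI flags are typical mlx-lm conventions, assumed here rather than confirmed by this skill's own documentation:

```python
# Hypothetical sketch: pick a Qwen2.5 variant and build the mlx_lm CLI call.
# Model repo names below are assumptions based on common mlx-community naming.

def build_generate_command(prompt: str, coding: bool = False) -> list:
    """Return an `mlx_lm.generate` invocation for a general or coding prompt."""
    model = (
        "mlx-community/Qwen2.5-Coder-7B-Instruct-4bit"  # coding assistance
        if coding
        else "mlx-community/Qwen2.5-7B-Instruct-4bit"   # general tasks
    )
    return [
        "python", "-m", "mlx_lm.generate",
        "--model", model,
        "--prompt", prompt,
    ]

# Build (but do not execute) a coding-assistance invocation:
cmd = build_generate_command("Explain this code snippet", coding=True)
print(" ".join(cmd))
```

Because everything stays on-device, the same invocation works offline once the model weights have been downloaded.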
Quick Start
Run the mlx-brain skill with the prompt "What is MLX?" to get a response from the default Qwen model.
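Outside the skill, the equivalent quick start with the `mlx-lm` package would look roughly like the following. This is a sketch, not the skill's own script: it assumes an Apple silicon Mac, the `mlx-lm` package name, and an `mlx-community` quantized Qwen model:

```shell
# Assumed setup: Apple silicon Mac with Python 3.9+.
pip install mlx-lm

# Generate a response locally; the model is downloaded on first run.
python -m mlx_lm.generate \
  --model mlx-community/Qwen2.5-7B-Instruct-4bit \
  --prompt "What is MLX?"
```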
Dependency Matrix
Required Modules
None required
Components
scripts
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill:
Name: mlx-brain
Download link: https://github.com/kjaylee/misskim-skills/archive/main.zip#mlx-brain
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.