groq-inference
Community
Ultra-fast Groq LLM inference for real-time AI.
Author: ScientiaCapital
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Groq-based ultra-fast LLM inference enables real-time AI capabilities across chat, vision, and audio workflows without relying on external OpenAI APIs.
Core Features & Use Cases
- Real-time chat inference with Groq-hosted models for low-latency requirements.
- Vision, OCR, speech-to-text (STT), text-to-speech (TTS), and tool-calling workflows in end-to-end pipelines (see the tool-calling sketch after this list).
- Reasoning and multi-model orchestration for responsive AI agents.
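As a sketch of the tool-calling workflow, the snippet below uses the official `groq` Python SDK's OpenAI-compatible chat API. The model name and the `get_weather` tool are illustrative assumptions, not part of this skill.

```python
import json
from groq import Groq  # pip install groq

client = Groq()  # reads GROQ_API_KEY from the environment

# Hypothetical tool definition, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # example model; check Groq's current model list
    messages=[{"role": "user", "content": "What is the weather in Tokyo?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model decided to call the tool, its arguments arrive as a JSON string.
tool_calls = response.choices[0].message.tool_calls or []
for call in tool_calls:
    print(call.function.name, json.loads(call.function.arguments))
```

In a full pipeline, the decoded arguments would be passed to your own function and the result sent back to the model in a follow-up message.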
Quick Start
Set GROQ_API_KEY in your environment, then run a basic chat request to observe near-instant responses; a minimal example follows.
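A minimal quick-start sketch using the `groq` Python SDK; the model name is an assumption and may need updating to a currently available Groq model.

```python
from groq import Groq  # pip install groq

client = Groq()  # picks up GROQ_API_KEY from the environment

response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # example model; substitute any available Groq model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```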
Dependency Matrix
Required Modules
None required
Components
Standard package
💻 Claude Code Installation
Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: groq-inference
Download link: https://github.com/ScientiaCapital/skills/archive/main.zip#groq-inference
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
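As a manual fallback, the sketch below downloads and extracts the archive with the Python standard library. The skills-main/groq-inference path is an assumption about GitHub's archive layout (the #groq-inference fragment in the link appears to name the skill subdirectory), and the user-level ~/.claude/skills destination may need adjusting for project-level installs.

```python
import io
import pathlib
import urllib.request
import zipfile

# Assumed layout: GitHub branch archives extract to <repo>-<branch>/,
# so the skill is expected at skills-main/groq-inference inside the zip.
URL = "https://github.com/ScientiaCapital/skills/archive/main.zip"
DEST = pathlib.Path.home() / ".claude" / "skills"  # user-level skills directory

with urllib.request.urlopen(URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

DEST.mkdir(parents=True, exist_ok=True)
for name in archive.namelist():
    if name.startswith("skills-main/groq-inference/"):
        # Re-root the entry under .claude/skills/groq-inference/.
        target = DEST / name.removeprefix("skills-main/")
        if name.endswith("/"):
            target.mkdir(parents=True, exist_ok=True)
        else:
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_bytes(archive.read(name))

print(f"Installed to {DEST / 'groq-inference'}")
```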