ai-llm-engineering
Community
Build, evaluate, and scale LLM systems for production.
Author: vasilyu1983
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Building and operating LLM systems in production involves complex challenges across architecture, evaluation, deployment, and safety. This Skill provides operational patterns and best practices for building robust LLM applications.
Core Features & Use Cases
- End-to-End LLM Lifecycle: Covers data preparation, fine-tuning (PEFT/LoRA), evaluation, deployment (vLLM), and LLMOps (monitoring, drift detection); a fine-tuning sketch follows this list.
- Advanced Architectures: Design RAG pipelines, agentic workflows (ReAct, multi-agent orchestration), and prompt engineering strategies for complex tasks.
- Production-Ready Standards: Integrates modern advances such as vLLM (reported up to 24x higher serving throughput than baseline Hugging Face Transformers), multi-layered security, and CI/CD-aligned evaluation for reliable systems.
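As a concrete illustration of the fine-tuning stage, here is a minimal sketch of attaching a LoRA adapter to a causal LM with Hugging Face peft. The base model name and the rank/alpha/target-module values are illustrative assumptions, not settings prescribed by this Skill.

```python
# Minimal LoRA setup with Hugging Face peft (sketch).
# Assumptions: "facebook/opt-350m" and the r/alpha/target_modules values below
# are illustrative defaults, not values mandated by the ai-llm-engineering Skill.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "facebook/opt-350m"  # small base model, chosen only to keep the sketch cheap
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank adapter matrices
    lora_alpha=16,                        # scaling applied to the adapter update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (model-specific)
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are marked trainable

# Train with a standard transformers Trainer / SFT loop (omitted for brevity),
# then persist just the adapter: model.save_pretrained("./lora-adapter")
```

After training, the adapter can be merged into the base weights and the merged model served for the deployment stage, for example through vLLM's `LLM`/`SamplingParams` API or its OpenAI-compatible server.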
Quick Start
Use the ai-llm-engineering skill to design a RAG pipeline for a customer support chatbot, including chunking and hybrid retrieval.
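Below is a minimal, self-contained sketch of the kind of pipeline that prompt asks for: fixed-size chunking with overlap, plus hybrid retrieval that fuses BM25 (lexical) and dense (embedding) rankings via reciprocal rank fusion. The libraries (rank_bm25, sentence-transformers), the embedding model, and all chunk/fusion parameters are assumptions chosen for illustration, not requirements of the Skill.

```python
# Hybrid-retrieval RAG sketch: chunking + BM25 + dense retrieval + RRF fusion.
# Assumptions: rank_bm25 and sentence-transformers as example libraries; chunk
# sizes, model name, and k values are illustrative, not prescribed by the Skill.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util


def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping word-window chunks."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]


def rrf(rankings: list[list[int]], k: int = 60) -> list[int]:
    """Reciprocal rank fusion over several ranked lists of chunk ids."""
    scores: dict[int, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)


# 1. Chunk the support-article corpus (placeholder documents here).
docs = [
    "How to reset a password: open Settings, choose Security, then Reset password.",
    "Billing questions: invoices are issued monthly and can be downloaded as PDF.",
]
chunks = [c for d in docs for c in chunk(d)]

# 2. Build a lexical (BM25) index and a dense (embedding) index over the chunks.
bm25 = BM25Okapi([c.lower().split() for c in chunks])
encoder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_emb = encoder.encode(chunks, convert_to_tensor=True)

# 3. Retrieve with both indexes and fuse the two rankings.
query = "How do I reset my password?"
bm25_scores = bm25.get_scores(query.lower().split())
bm25_rank = sorted(range(len(chunks)), key=lambda i: -bm25_scores[i])
dense_scores = util.cos_sim(encoder.encode(query, convert_to_tensor=True), chunk_emb)[0]
dense_rank = dense_scores.argsort(descending=True).tolist()

top_chunks = [chunks[i] for i in rrf([bm25_rank[:10], dense_rank[:10]])[:3]]
context = "\n\n".join(top_chunks)  # grounding context for the LLM prompt
print(context)
```

The fused top chunks would then be placed into the chatbot's prompt as grounding context before generation.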
Dependency Matrix
Required Modules
None required
Components
references, assets
💻 Claude Code Installation
Recommended: Let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: ai-llm-engineering
Download link: https://github.com/vasilyu1983/AI-Agents-public/archive/main.zip#ai-llm-engineering
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
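If you prefer to run those steps yourself instead of pasting the prompt, the sketch below mirrors them in Python: download the repository archive, extract it, locate the ai-llm-engineering folder, and copy it into .claude/skills/. The user-level ~/.claude/skills/ destination and the directory search are assumptions; adjust the destination if you keep Skills at the project level.

```python
# Manual install sketch: download the repo archive, extract it, and copy the
# "ai-llm-engineering" skill folder into ~/.claude/skills/.
# Assumptions: the skill lives in a directory named "ai-llm-engineering" somewhere
# inside the archive (located by search, not a hardcoded path), and Skills are
# installed user-level; both are illustrative choices.
import io
import shutil
import urllib.request
import zipfile
from pathlib import Path

ARCHIVE_URL = "https://github.com/vasilyu1983/AI-Agents-public/archive/main.zip"
SKILL_NAME = "ai-llm-engineering"
SKILLS_DIR = Path.home() / ".claude" / "skills"

# Download the archive and extract it to a temporary working directory.
with urllib.request.urlopen(ARCHIVE_URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))
work_dir = Path("./_skill_install_tmp")
archive.extractall(work_dir)

# Locate the skill folder inside the extracted tree and copy it into place.
matches = [p for p in work_dir.rglob(SKILL_NAME) if p.is_dir()]
if not matches:
    raise SystemExit(f"Could not find a '{SKILL_NAME}' directory in the archive")
dest = SKILLS_DIR / SKILL_NAME
shutil.copytree(matches[0], dest, dirs_exist_ok=True)
print(f"Installed {SKILL_NAME} to {dest}")
# Optionally clean up the temporary directory: shutil.rmtree(work_dir)
```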