llm-app-architecture
Community
Build robust, async LLM apps with confidence.
Author: ricardoroche
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
LLM-powered applications need reliable async calls, streaming responses, token management, retry logic, and robust error handling. This Skill provides architecture patterns for each of these concerns.
Core Features & Use Cases
- Async LLM Calls: Patterns for non-blocking model calls with proper error handling (a minimal sketch follows this list).
- Streaming Responses: Stream token-by-token responses to clients.
- Token Counting & Management: Track token usage and costs.
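As a rough illustration of the async-call, retry, and error-handling patterns above, here is a minimal sketch. It assumes the openai v1+ Python SDK; the model name, exception choice, and retry parameters are illustrative placeholders, not part of this Skill.

```python
import asyncio
from openai import AsyncOpenAI, APIError  # assumes the openai v1+ SDK

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


async def call_llm(prompt: str, retries: int = 3, backoff: float = 1.0) -> str:
    """Non-blocking model call with exponential-backoff retries."""
    for attempt in range(retries):
        try:
            response = await client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content or ""
        except APIError:
            if attempt == retries - 1:
                raise
            # Exponential backoff before retrying the failed call.
            await asyncio.sleep(backoff * 2 ** attempt)
    raise RuntimeError("unreachable")


if __name__ == "__main__":
    print(asyncio.run(call_llm("Summarize async retry patterns in one sentence.")))
```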
Quick Start
Create an async LLM client and endpoint that streams responses for a given prompt, with proper error handling and token accounting.
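A hedged sketch of such an endpoint, assuming FastAPI, the openai v1+ SDK, and tiktoken for approximate token accounting; the route path, model name, and tokenizer choice are placeholders rather than anything prescribed by this Skill.

```python
import tiktoken
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from openai import AsyncOpenAI  # assumes the openai v1+ SDK

app = FastAPI()
client = AsyncOpenAI()
encoding = tiktoken.get_encoding("cl100k_base")  # tokenizer choice is an assumption


@app.get("/stream")
async def stream_completion(prompt: str) -> StreamingResponse:
    """Stream model output token-by-token and log a rough token count."""

    async def generate():
        completion_tokens = 0
        response_stream = await client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            stream=True,
        )
        async for chunk in response_stream:
            if chunk.choices and chunk.choices[0].delta.content:
                delta = chunk.choices[0].delta.content
                completion_tokens += len(encoding.encode(delta))
                yield delta
        # Rough accounting: prompt tokens plus streamed completion tokens.
        prompt_tokens = len(encoding.encode(prompt))
        print(f"prompt={prompt_tokens} completion={completion_tokens} tokens")

    return StreamingResponse(generate(), media_type="text/plain")
```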
Dependency Matrix
Required Modules
None required
Components
Standard package
💻 Claude Code Installation
Recommended: let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill: Name: llm-app-architecture Download link: https://github.com/ricardoroche/ricardos-claude-code/archive/main.zip#llm-app-architecture Please download this .zip file, extract it, and install it in the .claude/skills/ directory.