rasa-configuring-model-groups
Official
Configure LLM and embedding providers.
Author: RasaHQ
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill simplifies the task of configuring model groups in `endpoints.yml`, enabling integration and request routing for various LLM and embedding providers within Rasa.
Core Features & Use Cases
- Provider Configuration: Set up LLM and embedding providers (OpenAI, Azure, self-hosted, etc.) in `endpoints.yml`.
- Multi-Deployment Routing: Configure strategies such as least-busy or latency-based routing across multiple model deployments within a single group.
- Caching & Failover: Implement response caching and define failover mechanisms for robust model group operation.
- Use Case: When setting up a new Rasa project that routes requests to multiple LLM instances for load balancing, use this Skill to define the `model_groups` in your `endpoints.yml`.
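For the multi-deployment use case above, a model group routing across two deployments might be sketched roughly as follows. This is an illustration only: the group id, model names, and endpoint URL are placeholders, and keys such as `router` and `routing_strategy` should be verified against the Rasa documentation for your version.

```yaml
model_groups:
  - id: production-llm            # hypothetical group id, referenced from the assistant config
    models:
      - provider: openai
        model: gpt-4o             # example model name
        api_key: ${OPENAI_API_KEY}
      - provider: azure
        deployment: my-gpt4-deployment                  # assumed Azure deployment name
        api_base: https://my-instance.openai.azure.com  # hypothetical endpoint
        api_key: ${AZURE_API_KEY}
    router:
      routing_strategy: least-busy   # routes each request to the least-loaded deployment
```

With a group like this, Rasa can balance load across both deployments and fail over if one becomes unavailable.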
Quick Start
Configure a model group for OpenAI embeddings in your `endpoints.yml` file.
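A minimal sketch of such an embedding group is shown below. The group id and the `text-embedding-3-small` model name are example values; check the Rasa documentation for the exact schema supported by your version.

```yaml
model_groups:
  - id: openai-embeddings        # hypothetical group id
    models:
      - provider: openai
        model: text-embedding-3-small   # example OpenAI embedding model
        api_key: ${OPENAI_API_KEY}      # read from an environment variable
```

Components that need embeddings can then reference this group by its id instead of hard-coding provider details.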
Dependency Matrix
Required Modules: None required
Components: references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill:
Name: rasa-configuring-model-groups
Download link: https://github.com/RasaHQ/rasa-agent-skills/archive/main.zip#rasa-configuring-model-groups
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.