aiconfig-online-evals
Official · Evaluate AI Configs with built-in judges.
Author: launchdarkly-labs
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Automatically score AI Config responses using LLM-as-a-judge methodology to ensure accuracy, relevance, and safety across variations.
Core Features & Use Cases
- Built-in judges: accuracy, relevance, and toxicity, each scored 0.0-1.0 (see the sketch after this list).
- Async evaluation: results appear in the Monitoring tab after a brief delay.
- Works with AI Configs in completion mode; integrates via the LaunchDarkly UI for judge configuration.
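In general terms, LLM-as-a-judge means sending the model's response to a second model with a grading rubric and parsing a numeric score out of its reply. The following is a minimal, generic sketch of that pattern, not LaunchDarkly's built-in judge implementation: the OpenAI client, the judge model name, and the rubric wording are all illustrative assumptions.

```python
# Generic LLM-as-a-judge sketch. Illustrative only: the OpenAI client,
# judge model, and rubric are assumptions, not LaunchDarkly's built-in judges.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Rate the RESPONSE for factual accuracy against the PROMPT. "
    "Reply with a single number between 0.0 (inaccurate) and 1.0 (accurate)."
)

def judge_accuracy(prompt: str, response: str) -> float:
    """Score a response 0.0-1.0 using a second model as the judge."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"PROMPT: {prompt}\nRESPONSE: {response}"},
        ],
    )
    # A sketch-level parse; a production judge would validate the reply format.
    return float(result.choices[0].message.content.strip())

score = judge_accuracy("What is the capital of France?", "Paris.")
print(f"accuracy: {score:.2f}")
```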
Quick Start
Enable judges for your AI Config in the LaunchDarkly UI (AI Configs -> your config -> Variations -> Attach judges). Then send a completion request from Python using the aiconfig-sdk and check the Monitoring tab for the judge scores.
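A minimal sketch of the request side, assuming the LaunchDarkly Python server SDK plus its AI package (launchdarkly-server-sdk and launchdarkly-server-sdk-ai); the SDK key, config key, context key, and the OpenAI call are placeholders, and class/method names may differ across SDK versions, so verify against your installed version.

```python
# Sketch only: class and method names follow the LaunchDarkly Python AI SDK
# (ldclient from launchdarkly-server-sdk, ldai from launchdarkly-server-sdk-ai);
# verify signatures against your installed versions. The SDK key, config key,
# and context key below are placeholders.
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AIConfig
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

ldclient.set_config(Config("sdk-key-placeholder"))
ai_client = LDAIClient(ldclient.get())
openai_client = OpenAI()

context = Context.builder("user-123").build()
default = AIConfig(enabled=False)  # fallback if the config can't be fetched

# Fetch the AI Config variation; the tracker ties this request's metrics
# (and the async judge evaluations) back to LaunchDarkly.
config, tracker = ai_client.config("my-ai-config-key", context, default)

# Run the completion through the tracker so LaunchDarkly can score it with
# the attached judges; results surface in the Monitoring tab after a delay.
completion = tracker.track_openai_metrics(
    lambda: openai_client.chat.completions.create(
        model=config.model.name,  # attribute names here are assumptions
        messages=[{"role": m.role, "content": m.content}
                  for m in (config.messages or [])],
    )
)
print(completion.choices[0].message.content)
```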
Dependency Matrix
Required Modules
None required
Components
Standard package
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill: Name: aiconfig-online-evals Download link: https://github.com/launchdarkly-labs/aiconfigs-skills/archive/main.zip#aiconfig-online-evals Please download this .zip file, extract it, and install it in the .claude/skills/ directory.