databricks-mlflow-evaluation
Community
Evaluate and optimize GenAI agents with MLflow.
Category: Software Engineering
Tags: prompt optimization, llm evaluation, evaluation, agent testing, mlflow, genai, trace analysis
Author: Aradhya0510
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill streamlines the evaluation of Generative AI agents and LLM applications, enabling rigorous quality assessment, debugging, and performance optimization.
Core Features & Use Cases
- Automated Evaluation: Run mlflow.genai.evaluate() with built-in or custom scorers (see the sketches after this list).
- Trace Analysis: Debug agent behavior using detailed trace data (see the trace-search sketch below).
- Prompt Optimization: Automatically improve prompts using GEPA with aligned judges.
- Production Monitoring: Continuously score live traffic with registered scorers.
- Use Case: Evaluate a RAG agent's groundedness and relevance, then use the aligned judge and GEPA to optimize its prompt for better accuracy and reduced hallucinations.
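A minimal sketch of the automated-evaluation flow, assuming MLflow 3.x with GenAI evaluation support installed. The my_agent function and the one-row dataset are illustrative placeholders; mlflow.genai.evaluate() and the built-in Safety and Correctness scorers follow the MLflow GenAI API, and Correctness judges the output against the expectations supplied with each record.

```python
import mlflow
from mlflow.genai.scorers import Correctness, Safety

# Illustrative stand-in for a real agent or RAG app.
def my_agent(question: str) -> str:
    return "Paris is the capital of France."

# Each record feeds predict_fn via "inputs"; Correctness needs
# "expectations" (e.g. expected_facts) to grade against.
eval_data = [
    {
        "inputs": {"question": "What is the capital of France?"},
        "expectations": {"expected_facts": ["Paris is the capital of France"]},
    },
]

results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,   # called with keyword args from "inputs"
    scorers=[Safety(), Correctness()],
)
print(results.metrics)  # aggregate scores per scorer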
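Custom scorers are plain functions wrapped with MLflow's @scorer decorator, whose parameters are drawn from inputs, outputs, expectations, and trace. The word-count heuristic and the name concise below are made up for illustration:

```python
from mlflow.genai.scorers import scorer

@scorer
def concise(outputs: str) -> bool:
    # Pass if the agent's answer stays under ~100 words.
    return len(outputs.split()) <= 100
```

The decorated function can then be passed alongside built-ins, e.g. scorers=[Safety(), concise].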
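For trace analysis, traces captured during evaluation (or via MLflow Tracing autologging) can be pulled back for debugging. A sketch assuming a configured tracking URI; the experiment ID and filter string are placeholders, and exact result columns can vary by MLflow version:

```python
import mlflow

# Fetch recent traces as a pandas DataFrame; the filter string
# uses MLflow's trace-search syntax (status, tags, timestamps).
traces = mlflow.search_traces(
    experiment_ids=["<your-experiment-id>"],  # placeholder
    filter_string="status = 'ERROR'",         # isolate failing runs
    max_results=20,
)
# Each row carries the full span tree, so individual retrieval or
# tool-call steps can be inspected for the failure point.
print(traces[["request", "response", "status"]])
```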
Quick Start
Use the databricks-mlflow-evaluation skill to evaluate my agent using the safety and correctness scorers.
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: databricks-mlflow-evaluation
Download link: https://github.com/Aradhya0510/databricks-cv-accelerator/archive/main.zip#databricks-mlflow-evaluation
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.