mlflow-evaluation

Community

Evaluate GenAI agents with MLflow.

Author: LaurentPRAT-DB
Version: 1.0.0

System Documentation

What problem does it solve?

This Skill helps you rigorously evaluate the quality, safety, and performance of your Generative AI agents and LLM applications, ensuring they meet your project's standards before deployment.

Core Features & Use Cases

  • Automated Evaluation: Run evaluations using built-in or custom scorers.
  • Trace Analysis: Debug agent behavior by analyzing execution traces.
  • Production Monitoring: Set up continuous quality checks on live traffic.
  • Use Case: You've built a customer support chatbot. Use this Skill to automatically evaluate its responses for safety, accuracy against expected answers, and adherence to brand guidelines, then monitor its performance in production. A code sketch of this flow follows below.
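
For example, the evaluation in that use case could look roughly like the sketch below. It assumes MLflow 3's mlflow.genai.evaluate API with the built-in Safety and Guidelines scorers; the test dataset, the my_agent object, and the guideline text are hypothetical placeholders you would replace with your own.

  import mlflow
  from mlflow.genai.scorers import Safety, Guidelines

  # Hypothetical test dataset: each record pairs agent inputs with expectations.
  eval_data = [
      {
          "inputs": {"question": "How do I reset my password?"},
          "expectations": {"expected_response": "Go to Settings > Security > Reset password."},
      },
  ]

  # Placeholder wrapper around your actual agent; the keyword arguments match
  # the keys of each record's "inputs" dict.
  def predict_fn(question: str) -> str:
      return my_agent.invoke(question)  # my_agent is a hypothetical stand-in

  results = mlflow.genai.evaluate(
      data=eval_data,
      predict_fn=predict_fn,
      scorers=[
          Safety(),  # built-in LLM judge that flags harmful or unsafe output
          Guidelines(
              name="brand_tone",
              guidelines="Respond politely and never mention competitors.",
          ),
      ],
  )
  print(results.metrics)  # aggregate scores per scorer

Each scorer also attaches per-row feedback to the corresponding trace, which you can review in the MLflow UI or query programmatically.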

Quick Start

Use the mlflow-evaluation skill to evaluate your agent against the provided test dataset with the Safety and Guidelines scorers.
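
Behind the scenes, each evaluated request is recorded as a trace, and you can query those traces directly to debug failures. Here is a sketch, assuming MLflow Tracing is enabled for your agent; the experiment name is hypothetical, and the filter syntax and column names may vary slightly across MLflow versions.

  import mlflow

  mlflow.set_experiment("customer-support-bot")  # hypothetical experiment name

  # Fetch recent failed traces as a pandas DataFrame for inspection.
  traces = mlflow.search_traces(
      filter_string="attributes.status = 'ERROR'",
      max_results=50,
  )
  print(traces[["request", "response", "execution_time_ms"]].head())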

Dependency Matrix

Required Modules

None required

Components

references

💻 Claude Code Installation

Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: mlflow-evaluation
Download link: https://github.com/LaurentPRAT-DB/LPT_claude_config/archive/main.zip#mlflow-evaluation

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
