lmms-eval-guide
Official · Navigate the LMMs-Eval codebase
Tags: Software Engineering, codebase guide, model integration, api server, lmms-eval, lmm evaluation, benchmark tasks
Author: EvolvingLMMs-Lab
Version: 1.0.0
System Documentation
What problem does it solve?
This Skill is a guide to understanding and working with the lmms-eval codebase, a framework for evaluating Large Multimodal Models (LMMs). It streamlines common workflows for developers and researchers.
Core Features & Use Cases
- Codebase Navigation: Understand the architecture and key components of lmms-eval.
- Workflow Guidance: Get step-by-step instructions for common tasks like adding models, adding tasks, or running evaluations.
- Debugging Assistance: Learn how to systematically debug pipeline failures.
- Use Case: A researcher wants to integrate a new LMM into lmms-eval. They can use this Skill to find the correct files for the model backend implementation, understand the registration process, and learn how to run a smoke test to verify their integration.
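The registration process mentioned above follows a decorator-based registry pattern. The sketch below illustrates that pattern in a self-contained way; the names `register_model`, `lmms`, `generate_until`, and `MyNewLMM` are simplified stand-ins for illustration, so check the actual lmms-eval source for the real interface before integrating.

```python
# Illustrative sketch of decorator-based model registration, as used by
# evaluation harnesses like lmms-eval. All names here are stand-ins for
# the real library API, not the actual lmms-eval implementation.
from abc import ABC, abstractmethod

MODEL_REGISTRY = {}

def register_model(name):
    """Decorator mapping a CLI-facing model name to its backend class."""
    def decorator(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return decorator

class lmms(ABC):
    """Minimal stand-in for the abstract model backend interface."""
    @abstractmethod
    def generate_until(self, requests):
        """Produce free-form text for each (context, gen_kwargs) request."""
        ...

@register_model("my_new_lmm")
class MyNewLMM(lmms):
    def generate_until(self, requests):
        # A real backend would run model inference here; we echo the
        # context so the sketch stays self-contained and runnable.
        return [f"response to: {ctx}" for ctx, _ in requests]

# Smoke test: resolve the backend by name and run a single request.
backend = MODEL_REGISTRY["my_new_lmm"]()
print(backend.generate_until([("Describe the image.", {})]))
```

The key design point is that the evaluation driver never imports backend classes directly; it looks them up by the name passed on the command line, so adding a model only requires defining and registering the class.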
Quick Start
Use the lmms-eval-guide skill to understand how to add a new model backend to the lmms-eval codebase.
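Once a backend is registered, a quick smoke test typically runs the model on a small sample of one task. The invocation below is a sketch: the flag names follow lmms-eval's documented CLI, but the model name `my_new_lmm` is a hypothetical placeholder, so confirm the exact flags against `python -m lmms_eval --help` in your checkout.

```shell
# Hypothetical smoke-test run: evaluate a registered backend on a few
# samples of a single task. Verify flag names against your lmms-eval
# version before relying on them.
python -m lmms_eval \
    --model my_new_lmm \
    --tasks mme \
    --batch_size 1 \
    --limit 8 \
    --output_path ./logs/
```

Limiting the sample count keeps the first run fast, so registration and I/O bugs surface before a full benchmark pass.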
Dependency Matrix
Required Modules: None required
Components: references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill: Name: lmms-eval-guide Download link: https://github.com/EvolvingLMMs-Lab/lmms-eval/archive/main.zip#lmms-eval-guide Please download this .zip file, extract it, and install it in the .claude/skills/ directory.