lm-evaluation-harness

Community

Benchmark LLMs against 60+ standardized tasks.

Author: ovachiever
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

Provides a unified interface to evaluate language models across a broad suite of benchmarks (MMLU, GSM8K, HumanEval, etc.), enabling rigorous model comparison.

Core Features & Use Cases

  • Benchmark Suite: 60+ tasks including code, reasoning, and multilingual benchmarks
  • Model Compatibility: supports HuggingFace Transformers, vLLM, and API-based backends (see the sketch after this list)
  • Distributed / Parallel Runs: scalable evaluation on GPUs or CPUs
  • Custom Tasks: add private datasets and metrics
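
The backend is selected with the --model flag, so the same task list can be rerun against different runtimes. A minimal sketch, assuming the lm_eval CLI installed by the harness; the model name is a placeholder:

    # HuggingFace Transformers backend
    lm_eval --model hf \
        --model_args pretrained=mistralai/Mistral-7B-v0.1 \
        --tasks mmlu \
        --device cuda:0 \
        --batch_size 8

    # vLLM backend: same task, typically faster batched inference
    lm_eval --model vllm \
        --model_args pretrained=mistralai/Mistral-7B-v0.1 \
        --tasks mmlu \
        --batch_size auto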

Quick Start

Evaluate a 7B HF model on GSM8K and HumanEval using 5-shot prompts.
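
A hedged sketch of that run with the lm_eval CLI (the model name is a placeholder; HumanEval executes model-generated code, which recent harness versions gate behind an explicit opt-in, so check lm_eval --help for the current flag):

    # 5-shot evaluation on GSM8K and HumanEval
    HF_ALLOW_CODE_EVAL=1 lm_eval \
        --model hf \
        --model_args pretrained=meta-llama/Llama-2-7b-hf \
        --tasks gsm8k,humaneval \
        --num_fewshot 5 \
        --device cuda:0 \
        --batch_size 8 \
        --output_path results/

Per-task results are written as JSON under --output_path, which makes side-by-side model comparisons straightforward.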

Dependency Matrix

Required Modules

  • lm-eval
  • transformers
  • vllm
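
All three can be installed from PyPI in one step, assuming the standard package names (vLLM additionally requires a CUDA-capable environment):

    pip install lm-eval transformers vllm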

Components

references

💻 Claude Code Installation

Recommended: let Claude install the skill automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: lm-evaluation-harness
Download link: https://github.com/ovachiever/droid-tings/archive/main.zip#lm-evaluation-harness

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
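
If you prefer to install manually, a sketch of the equivalent steps; the extraction directory (droid-tings-main/) follows GitHub's archive naming, and the lm-evaluation-harness/ subdirectory is an assumption, so verify paths after unzipping:

    # Download and extract the skill archive
    curl -L -o droid-tings.zip https://github.com/ovachiever/droid-tings/archive/main.zip
    unzip droid-tings.zip
    # Copy the skill folder (path inside the archive is assumed) into Claude's skills directory
    mkdir -p ~/.claude/skills
    cp -r droid-tings-main/lm-evaluation-harness ~/.claude/skills/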
