llm-evaluate

Official

Find the best LLMs by price and performance.

Author: lucidlabs-hq
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

Evaluates large language model options for cost and performance to help teams pick the right provider for a given use case.

Core Features & Use Cases

  • Pricing-aware evaluation across major providers (Anthropic, OpenAI, Google, DeepSeek, Mistral, xAI)
  • Performance and feature scoring based on latency, context window, and capabilities relevant to chat, document analysis, or coding tasks
  • Use-case mapping to align model choice with application needs such as chatbots, document processing, or code generation
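The skill's actual weighting is internal to the package, but the pricing-aware ranking idea above can be sketched roughly as follows. All model names, prices, and weights below are illustrative assumptions, not real provider figures:

```python
# Hypothetical price/performance ranking sketch.
# Model names and all numbers are made up for illustration.
MODELS = {
    "model-a": {"input_price": 3.00, "latency_ms": 900, "context_k": 200},
    "model-b": {"input_price": 0.27, "latency_ms": 1400, "context_k": 128},
    "model-c": {"input_price": 1.25, "latency_ms": 700, "context_k": 1000},
}

def score(m, w_price=0.5, w_latency=0.3, w_context=0.2):
    # Lower price and latency score higher; a larger context window scores higher.
    return (w_price / m["input_price"]
            + w_latency * 1000 / m["latency_ms"]
            + w_context * m["context_k"] / 100)

ranked = sorted(MODELS, key=lambda name: score(MODELS[name]), reverse=True)
print(ranked)
```

Adjusting the weights models different use cases: a latency-sensitive chatbot would raise `w_latency`, while bulk document processing would favor `w_price` and `w_context`.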

Quick Start

Describe your use case and run the skill to receive prioritized model recommendations based on price, performance, and context.

Dependency Matrix

Required Modules

None

Components

Standard package

💻 Claude Code Installation

Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: llm-evaluate
Download link: https://github.com/lucidlabs-hq/agent-kit/archive/main.zip#llm-evaluate

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
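If you prefer a manual install, the steps above can be sketched in Python. The extracted folder name and the skill's location inside the archive are assumptions; verify them after extracting:

```python
import io
import pathlib
import urllib.request
import zipfile

URL = "https://github.com/lucidlabs-hq/agent-kit/archive/main.zip"
skills_dir = pathlib.Path(".claude") / "skills"  # project-local skills directory
skills_dir.mkdir(parents=True, exist_ok=True)

# Download and extract (commented out so this sketch runs offline):
# data = urllib.request.urlopen(URL).read()
# with zipfile.ZipFile(io.BytesIO(data)) as zf:
#     zf.extractall(".")
# The archive's internal layout is an assumption -- locate the llm-evaluate
# folder in the extracted tree and copy it into skills_dir.
print(f"Install target: {skills_dir / 'llm-evaluate'}")
```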
