token-usage-optimization


Optimize LLM token usage and cost.

Author: fbosch
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill helps teams estimate and reduce token usage, cost, and latency for prompts, agents, and tool workflows, enabling faster iterations and more predictable performance.

Core Features & Use Cases

  • Token estimation for prompt components such as system, developer, tool lists, history, retrieval, and user input.
  • An optimization playbook that trims inputs, reduces retrieval size, enables prompt caching, and enforces output length caps.
  • Suitable for budgeting, model selection, and troubleshooting context length issues across AI tasks, agents, and tool-based conversations.
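Token estimation for the components above can be sketched with the common heuristic of roughly four characters per token for English text. This is an illustrative approximation only, not the Skill's actual implementation (which relies on the tokenx module); the component names and the 4-characters-per-token rule are assumptions, and real tokenizers vary by model.

```typescript
type PromptComponent =
  | "system"
  | "developer"
  | "tools"
  | "history"
  | "retrieval"
  | "user";

// Rough rule of thumb: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Estimate each component separately, then sum for a total budget figure.
function estimateBudget(components: Record<PromptComponent, string>) {
  const perComponent = Object.fromEntries(
    Object.entries(components).map(([name, text]) => [name, estimateTokens(text)])
  ) as Record<PromptComponent, number>;
  const total = Object.values(perComponent).reduce((a, b) => a + b, 0);
  return { perComponent, total };
}

const { perComponent, total } = estimateBudget({
  system: "You are a concise assistant.",
  developer: "",
  tools: "search(query: string): results",
  history: "",
  retrieval: "",
  user: "Summarize the attached report in three bullet points.",
});
console.log(perComponent, total);
```

Per-component counts make it obvious which part of the prompt (often retrieval or history) dominates the budget before any optimization is attempted.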

Quick Start

Provide your target prompt and budget, then run the estimator to obtain token counts and a prioritized optimization plan.
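A prioritized plan like the one the estimator returns can be sketched as follows: components are ranked by token count so the largest consumers are trimmed first. The interface, threshold logic, and suggestion text are hypothetical, shown only to illustrate the idea of budget-driven prioritization.

```typescript
interface Estimate {
  name: string;
  tokens: number;
}

// If the total exceeds the budget, rank components largest-first:
// trimming the biggest consumers yields the biggest savings.
function optimizationPlan(estimates: Estimate[], budget: number): string[] {
  const total = estimates.reduce((sum, e) => sum + e.tokens, 0);
  if (total <= budget) return ["Within budget; no action needed."];
  return estimates
    .slice()
    .sort((a, b) => b.tokens - a.tokens)
    .map(
      (e) =>
        `Trim '${e.name}' (${e.tokens} tokens, ${Math.round(
          (100 * e.tokens) / total
        )}% of total)`
    );
}

const plan = optimizationPlan(
  [
    { name: "retrieval", tokens: 6000 },
    { name: "history", tokens: 2500 },
    { name: "system", tokens: 400 },
  ],
  4000
);
console.log(plan);
```

In this sketch the plan immediately points at retrieval size as the first thing to reduce, which matches the playbook's emphasis on trimming inputs and shrinking retrieval before tightening output caps.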

Dependency Matrix

Required Modules

tokenx

Components

scripts, references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: token-usage-optimization
Download link: https://github.com/fbosch/dotfiles/archive/main.zip#token-usage-optimization

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
