observability-llm-obs
Official · Monitor LLMs and AI apps
Author: elastic
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill helps you monitor the performance, cost, and quality of your Large Language Models (LLMs) and agentic applications by analyzing data ingested into Elastic.
Core Features & Use Cases
- Performance Monitoring: Track latency, throughput, and error rates for LLM operations.
- Cost & Token Tracking: Analyze token usage and estimate costs associated with LLM calls.
- Response Quality: Identify issues related to response quality, content filtering, and errors.
- Workflow Orchestration: Analyze call chaining and agentic workflows to understand execution flow and identify bottlenecks.
- Use Case: You can use this Skill to answer questions like "What is the average token usage for our OpenAI calls yesterday?" or "Are there any LLM operations experiencing high latency or error rates?"
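To make the Cost & Token Tracking idea concrete, here is a minimal sketch of estimating spend from token counts. The per-1K-token rates below are placeholder assumptions, not published pricing, and the `estimate_cost` helper is hypothetical; substitute your provider's actual rates.

```python
# Hypothetical sketch: estimating LLM spend from token counts.
# The per-1K-token prices are placeholder assumptions, not published
# rates; substitute your provider's current pricing.
PRICES_PER_1K = {
    "gpt-4": {"input": 0.03, "output": 0.06},  # assumed example rates
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return an estimated cost in USD for one LLM call."""
    rates = PRICES_PER_1K[model]
    return (input_tokens / 1000) * rates["input"] \
         + (output_tokens / 1000) * rates["output"]

# Example: a call with 1,200 input tokens and 400 output tokens.
print(round(estimate_cost("gpt-4", 1200, 400), 4))  # → 0.06
```

In practice the skill derives the token counts from data ingested into Elastic; this sketch only shows the arithmetic applied afterwards.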
Quick Start
Use the observability-llm-obs skill to find the total input tokens used by the 'gpt-4' model in the last 24 hours.
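The aggregation behind that Quick Start prompt can be sketched in miniature. The field names (`model`, `timestamp`, `input_tokens`) are assumptions standing in for whatever schema your LLM telemetry uses; the real skill runs this as a query against Elasticsearch rather than over an in-memory list.

```python
# Illustrative sketch of "total input tokens for 'gpt-4' in the last
# 24 hours". Field names are assumed, not the skill's actual schema.
from datetime import datetime, timedelta, timezone

def total_input_tokens(docs, model, window=timedelta(hours=24), now=None):
    """Sum input tokens for one model over the trailing time window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - window
    return sum(
        d["input_tokens"]
        for d in docs
        if d["model"] == model and d["timestamp"] >= cutoff
    )

# Mock documents resembling ingested LLM call records.
now = datetime.now(timezone.utc)
docs = [
    {"model": "gpt-4", "timestamp": now - timedelta(hours=1), "input_tokens": 900},
    {"model": "gpt-4", "timestamp": now - timedelta(hours=30), "input_tokens": 500},  # outside window
    {"model": "claude-3", "timestamp": now - timedelta(hours=2), "input_tokens": 700},  # other model
]
print(total_input_tokens(docs, "gpt-4"))  # → 900
```

Only the first record matches both the model filter and the 24-hour window, so the sum excludes the stale and off-model records.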
Dependency Matrix
Required Modules: None required
Components: references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: observability-llm-obs
Download link: https://github.com/elastic/agent-skills/archive/main.zip#observability-llm-obs
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.