edit-llm-inference-style
Standardize LLM prompts for reliable inference.
Category: Community
Author: anhvth
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Users of speedy_utils struggle to configure prompts consistently and to handle model outputs across different inference styles. This skill provides a standardized approach to building prompts, applying chat templates, reasoning prefixes, and safe stopping rules, which improves reliability and makes outputs easier to evaluate.
Core Features & Use Cases
- Chat templating: Apply a consistent chat format to derive predictable prompts across models.
- Reasoning prefixes: Enforce a <think> style prefix to separate reasoning from final answers.
- Stop sequences & boxed outputs: Stop generation at a boxed answer or an end token to simplify parsing and evaluation (see the sketch after this list).
- Compatibility guidance: Works with transformers-based tokenizers and a configured LLM instance to ensure smooth integration.
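As a rough illustration of the first three features, the sketch below builds a prompt with a transformers tokenizer's chat template, appends a <think> reasoning prefix, and defines stop sequences. The model name and the exact stop strings are assumptions; substitute whatever your configured LLM instance uses.

```python
from transformers import AutoTokenizer

# Stand-in model name; replace with the model your LLM instance is configured for.
MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

messages = [
    {"role": "system", "content": "You are a careful math assistant."},
    {"role": "user", "content": "What is 17 * 24? Give the final answer in \\boxed{}."},
]

# Chat templating: derive a predictable prompt string from the model's own template.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Reasoning prefix: open a <think> block so reasoning is separated from the final answer.
prompt += "<think>\n"

# Stop rules: end generation at the end-of-turn token; add a project-specific
# marker (e.g. the text that follows the closing brace of \boxed{...}) if you
# want to cut generation right after the boxed answer appears.
stop_sequences = [tokenizer.eos_token]

print(prompt)
print(stop_sequences)
```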
Quick Start
Configure your LLM pipeline to apply the provided chat template, reasoning prefix, and stop-sequence rules when generating outputs.
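A minimal end-to-end sketch using plain transformers is shown below. It assumes a recent transformers release in which generate accepts stop_strings (older versions need a custom StoppingCriteria), and the model name is again a stand-in for whatever your pipeline is configured with.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"  # stand-in; use your configured model
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype="auto").to(device)

messages = [{"role": "user", "content": "Compute 12 + 30 and give the result in \\boxed{}."}]

# Apply the chat template and the reasoning prefix in one place, so every
# request goes through the same inference style.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
) + "<think>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# stop_strings requires a recent transformers release and the tokenizer argument;
# on older versions, implement the same rule with a StoppingCriteria.
output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    stop_strings=[tokenizer.eos_token],
    tokenizer=tokenizer,
)

# Decode only the newly generated tokens, leaving the prompt out of the result.
completion = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion)
```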
Dependency Matrix
Required Modules
None required
Components
Standard package
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: edit-llm-inference-style
Download link: https://github.com/anhvth/speedy_utils/archive/main.zip#edit-llm-inference-style
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.