RLM REPL Environment
Category: Community
Description: Persistent REPL for recursive reasoning.
Author: Magic8Ballin
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill provides a persistent Python REPL environment that enables recursive prompting and batched sub-LM queries, so you can reason over inputs too large for a single context window without losing state between steps.
Core Features & Use Cases
- Persistent context is exposed via a global context variable containing the original input.
- The llm_query(prompt) function enables recursive delegation to sub-LMs for chunked analysis.
- Signaling completion with FINAL() or FINAL_VAR() ensures explicit, verifiable outputs.
- Print statements reveal intermediate results with truncation to support iterative reasoning.
- The workflow supports batching and error-handling guidance to manage large contexts safely.
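The batching and error-handling workflow above can be sketched as follows. This is a minimal illustration, not the Skill's actual implementation: `llm_query` and `context` are provided by the RLM REPL at runtime, so stub versions are defined here to make the sketch self-contained.

```python
# Stub standing in for the REPL-provided sub-LM call (assumption:
# the real llm_query takes a prompt string and returns a string).
def llm_query(prompt: str) -> str:
    return f"summary of: {prompt[:30]}"

# Stub for the REPL-provided global holding the original input.
context = "..."

def chunks(text: str, size: int) -> list[str]:
    """Split the input into fixed-size pieces for batched delegation."""
    return [text[i:i + size] for i in range(0, len(text), size)]

results = []
for i, chunk in enumerate(chunks(context, 2000)):
    try:
        results.append(llm_query(f"Summarize chunk {i}:\n{chunk}"))
    except Exception as exc:  # keep going if one sub-query fails
        results.append(f"[chunk {i} failed: {exc}]")

# Print a truncated intermediate result to support iterative reasoning.
print(results[0][:80])
```

Batching keeps each sub-LM call small, and the per-chunk `try/except` ensures one failed delegation does not abort the whole pass.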
Quick Start
Start the REPL, inspect the available tools, and use llm_query() to delegate sub-tasks in batches. Once a result is ready, call FINAL() or FINAL_VAR() to emit the answer explicitly.
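A minimal quick-start flow might look like the sketch below. The names `llm_query` and `FINAL` are assumed to be injected by the REPL environment; stubs are defined here so the example runs standalone.

```python
# Stubs for the REPL-provided functions (assumptions, not the real API).
def llm_query(prompt: str) -> str:
    return "42"

def FINAL(answer: str) -> str:
    # In the real environment this signals completion; here it just
    # echoes the answer so the flow is observable.
    print(f"FINAL: {answer}")
    return answer

# Delegate a sub-task, inspect the intermediate result, then finish.
partial = llm_query("What is the answer?")
print(partial[:40])  # truncated peek at the intermediate value
answer = FINAL(partial)
```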
Dependency Matrix
Required Modules: None required
Components: Standard package
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill: Name: RLM REPL Environment Download link: https://github.com/Magic8Ballin/rlm-skills/archive/main.zip#rlm-repl-environment Please download this .zip file, extract it, and install it in the .claude/skills/ directory.