collab-evals
Run collab evals and capture manifest evidence.
Category: Community
Author: Kbediako
Version: 1.0.0
System Documentation
What problem does it solve?
Collab-evals provides a framework for running repeatable multi-agent evaluation scenarios (symbolic RLM, large-context interactions) and preserving evidence via manifest-backed outputs, reducing ad-hoc experimentation and enabling audit trails.
Core Features & Use Cases
- Orchestrates collab-driven evaluations across multi-agent workflows including symbolic RLM and large-context tests.
- Supports pause/resume, long-running experiments, and checkpointing for resilience.
- Generates manifest-backed evidence and updates documentation with findings, supporting traceability and reproducibility (see the inspection sketch below).
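A rough illustration of reviewing manifest-backed evidence after a run. The directory layout and field names below are hypothetical (the real schema comes from the orchestrator's output), so treat this as a sketch rather than a documented interface:

```sh
# Hypothetical layout: one manifest per run under runs/<task-id>/.
ls runs/*/manifest.json

# Pretty-print one manifest to review the captured evidence;
# the path is illustrative, not a documented location.
jq . runs/eval-2024-01/manifest.json
```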
Quick Start
- Pick the scenario(s) for evaluation:
  - Large-context symbolic RLM with collab subcalls.
  - Multi-hour refactor with checkpoints.
  - 24h pause/resume context-rot regression.
  - Multi-day initiative (48–72h) with multiple resumes.
- Ensure the task context is set:
  - `export MCP_RUNNER_TASK_ID=<task-id>`
- Run the scenario with `codex-orchestrator start <pipeline> --format json` and record the manifest path from its output (see the sketch below).
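A minimal end-to-end sketch of the Quick Start flow, assuming the orchestrator's JSON output exposes the manifest location under a `.manifest` key (that key, the `large-context-rlm` pipeline name, and the task id are all assumptions for illustration):

```sh
# Set the task context so the run is associated with this task.
export MCP_RUNNER_TASK_ID=eval-2024-01   # example task id

# Start the evaluation pipeline and capture its JSON output.
# "large-context-rlm" is a hypothetical pipeline name.
codex-orchestrator start large-context-rlm --format json > run.json

# Record the manifest path for the audit trail.
# The ".manifest" key is an assumption; adjust to the actual field name.
jq -r '.manifest' run.json | tee manifest-path.txt
```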
Dependency Matrix
- Required Modules: None required
- Components: Standard package

💻 Claude Code Installation
Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code:

```
Please help me install this Skill:
Name: collab-evals
Download link: https://github.com/Kbediako/CO/archive/main.zip#collab-evals
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
```