gpu-keepalive-with-keepgpu


Keep GPUs alive without heavy jobs.

Author: Wangmerlyn
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill keeps a shared GPU visibly active during preparatory tasks such as data preprocessing, debugging, or multi-stage pipeline coordination, so the device is not reclaimed by a scheduler or silently handed to another user before your heavy jobs start.

Core Features & Use Cases

  • Resource Reservation: Allocates minimal VRAM and issues lightweight CUDA work to signal an "active" device to schedulers.
  • Polite Resource Usage: Uses NVML to monitor utilization and backs off when the GPU is actively in use by another process.
  • Flexible Operation: Supports both blocking CLI mode for manual control and non-blocking service mode for agent workflows.
  • Use Case: When running a long data preprocessing job on a shared cluster, use this Skill to ensure your allocated GPU isn't taken by another user or process while you wait.
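The reserve-then-back-off behavior described above can be sketched as a simple loop. This is an illustrative sketch, not KeepGPU's actual implementation: the get_utilization and do_work callables are hypothetical stand-ins for an NVML utilization query and a lightweight CUDA kernel launch, respectively.

```python
import time

def keep_alive(get_utilization, do_work, busy_threshold=25,
               interval=1.0, max_ticks=None):
    """Polite GPU keep-alive loop (illustrative only).

    get_utilization: callable returning current GPU utilization in percent
        (in practice, something like NVML's utilization query).
    do_work: callable issuing a small burst of GPU work so the device
        appears active to schedulers.
    busy_threshold: skip our work whenever another process pushes
        utilization above this percentage.
    max_ticks: optional iteration cap (None means run until interrupted).
    Returns the number of ticks on which keep-alive work was issued.
    """
    ticks = 0
    worked = 0
    while max_ticks is None or ticks < max_ticks:
        if get_utilization() < busy_threshold:
            do_work()   # GPU looks idle: issue our keep-alive work
            worked += 1
        # else: another process is using the GPU, so stay out of its way
        ticks += 1
        time.sleep(interval)
    return worked

# Simulated run: utilization readings of 10%, 90%, 90%, 5% with a
# 25% threshold should trigger work on exactly two ticks.
readings = iter([10, 90, 90, 5])
issued = keep_alive(lambda: next(readings), lambda: None,
                    busy_threshold=25, interval=0.0, max_ticks=4)
print(issued)
```

The key design point, which the Skill's "Polite Resource Usage" feature reflects, is that the loop yields whenever real work from another process raises utilization, rather than competing with it.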

Quick Start

Install KeepGPU:

    pip install keep-gpu

Then start a non-blocking keep-alive session for GPU 0, holding 1 GiB of VRAM and backing off when utilization exceeds 25%:

    keep-gpu start --gpu-ids 0 --vram 1GiB --busy-threshold 25

Dependency Matrix

Required Modules

None required

Components

  • scripts
  • references

💻 Claude Code Installation

Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: gpu-keepalive-with-keepgpu
Download link: https://github.com/Wangmerlyn/KeepGPU/archive/main.zip#gpu-keepalive-with-keepgpu

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
