English | 简体中文
gflow is a lightweight scheduler for a single Linux machine. It brings a Slurm-like workflow to shared GPU workstations and lab servers without cluster setup.
- Queue and run jobs on one machine.
- Submit commands or scripts with GPUs, time limits, dependencies, arrays, and priorities.
- Inspect, attach, cancel, and recover jobs with a small CLI.
Requirements: Linux and tmux; NVIDIA drivers are needed only if you want GPU scheduling.
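A quick way to confirm the prerequisites are in place (an illustrative check, not part of gflow itself):

```shell
# Check for tmux (required) and the NVIDIA driver (optional, GPU jobs only)
command -v tmux >/dev/null 2>&1 && echo "tmux: found" || echo "tmux: missing"
command -v nvidia-smi >/dev/null 2>&1 && echo "nvidia driver: found" || echo "nvidia driver: missing (fine for CPU-only jobs)"
```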
Install with Python tooling:

```shell
uv tool install runqd
# or
pipx install runqd
# or
pip install runqd
```

Install with Cargo:

```shell
cargo install gflow
```

Nightly build:

```shell
pip install --index-url https://test.pypi.org/simple/ runqd
```

Quick start:

```shell
gflowd init
gflowd up
gbatch --gpus 1 --name demo bash -lc 'echo "hello from gflow"; sleep 30'
gqueue
gjob show <job_id>
```
```shell
gflowd down
```

gflow can also run as a local MCP server for Claude Desktop, Claude Code, Codex, Cursor, and similar tools:

```shell
gflow mcp serve
```

Keep gflowd running on the same machine. MCP clients start `gflow mcp serve` as a local stdio server.
Claude Desktop example:
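A minimal entry for `claude_desktop_config.json`, assuming the standard MCP client config format (the `command` and `args` values mirror the Codex config below):

```json
{
  "mcpServers": {
    "gflow": {
      "command": "gflow",
      "args": ["mcp", "serve"]
    }
  }
}
```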
Claude Code:

```shell
claude mcp add --scope user gflow -- gflow mcp serve
```

Codex:

```shell
codex mcp add gflow -- gflow mcp serve
```

Or via `~/.codex/config.toml`:

```toml
[mcp_servers.gflow]
command = "gflow"
args = ["mcp", "serve"]
```

If gflow is not on your PATH, replace it with the absolute binary path.
Most usage details live in the docs.
Contributions are welcome: please open an Issue or Pull Request.
MIT. See LICENSE.