
gflow


English | 简体中文

gflow is a lightweight job scheduler for a single Linux machine, written in Rust. It brings a Slurm-like workflow to shared GPU workstations and lab servers without any cluster setup.


Why gflow

  • Queue and run jobs on one machine.
  • Submit commands or scripts with GPUs, time limits, dependencies, arrays, and priorities.
  • Inspect, attach, cancel, and recover jobs with a small CLI.
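For example, a two-step pipeline with a dependent job might look roughly like this. Only `--gpus` and `--name` are shown elsewhere in this README; the other flag names here are hypothetical placeholders for the features listed above, so check `gbatch --help` for the real spellings:

```shell
# Submit a training job with a GPU and a time limit, then an
# evaluation job that should run after it.
# NOTE: --time, --depends, and --priority are illustrative flag
# names, not confirmed gbatch options.
gbatch --gpus 1 --name train --time 02:00:00 bash -lc 'python train.py'
gbatch --name eval --depends train --priority 10 bash -lc 'python eval.py'
```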

Install

Requirements: Linux and tmux. NVIDIA drivers are needed only if you want GPU scheduling.

Install with Python tooling:

uv tool install runqd
# or
pipx install runqd
# or
pip install runqd

Install with Cargo:

cargo install gflow

Nightly build:

pip install --index-url https://test.pypi.org/simple/ runqd

Quick Start

gflowd init          # create the default configuration
gflowd up            # start the scheduler daemon
gbatch --gpus 1 --name demo bash -lc 'echo "hello from gflow"; sleep 30'
gqueue               # list queued and running jobs
gjob show <job_id>   # inspect a single job
gflowd down          # stop the daemon

MCP

gflow can also run as a local MCP server for Claude Desktop, Claude Code, Codex, Cursor, and similar tools:

gflow mcp serve

Keep gflowd running on the same machine. MCP clients start gflow mcp serve as a local stdio server.

Claude Desktop example:
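A minimal `claude_desktop_config.json` entry, assuming the `gflow` binary is on your PATH. The `mcpServers` layout is Claude Desktop's standard MCP configuration format; the command and arguments mirror the Codex TOML shown further down:

```json
{
  "mcpServers": {
    "gflow": {
      "command": "gflow",
      "args": ["mcp", "serve"]
    }
  }
}
```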

Claude Code:

claude mcp add --scope user gflow -- gflow mcp serve

Codex:

codex mcp add gflow -- gflow mcp serve

Or via ~/.codex/config.toml:

[mcp_servers.gflow]
command = "gflow"
args = ["mcp", "serve"]

If gflow is not on your PATH, replace it with the absolute binary path.

Documentation

Most usage details live in the documentation.

Contributing

Please open an Issue or Pull Request.

License

MIT. See LICENSE.
