Inspired by KNIME. Made for the cloud. Built to never die.
Project Continuum is split across several focused repositories:
| Repository | Description |
|---|---|
| Continuum (this repo) | Core backend — API server, worker framework, shared libraries |
| continuum-workbench | Browser IDE — Eclipse Theia + React Flow workflow editor |
| continuum-feature-base | Base analytics nodes — data transforms, REST, scripting, anomaly detection |
| continuum-feature-ai | AI/ML nodes — LLM fine-tuning with Unsloth + LoRA |
| continuum-feature-template | Template — scaffold your own custom worker with nodes |
This monorepo contains the core backend infrastructure for Project Continuum:
| Module | Purpose |
|---|---|
| continuum-commons | Shared library — node model base classes, data types, Parquet/S3 utilities |
| continuum-worker-springboot-starter | Spring Boot starter for building workers — auto-registers nodes with Temporal |
| continuum-api-server | REST API server — manages workflows, node registry, executions |
| continuum-message-bridge | Kafka-to-MQTT bridge — streams execution events to the browser |
| continuum-avro-schemas | Shared Avro schemas for Kafka messages |
| continuum-knime-base | KNIME compatibility layer (experimental) |
Looking for workflow nodes? See continuum-feature-base and continuum-feature-ai.
Looking for the UI? See continuum-workbench.
- Truly cloud-native — not a desktop app ported to the web. Built from day one for browsers, containers, and distributed infrastructure.
- Crash-proof by design — powered by Temporal, workflows survive process crashes, network failures, and restarts without losing a single step.
- Watch it happen live — every node execution streams back to your browser in real time via Kafka → MQTT. No refresh. No polling.
- Data stays fast — nodes pass Apache Parquet tables, not JSON blobs. Columnar, compressed, query-ready from the start.
- Extend without breaking — add new capabilities by deploying new workers, not by touching existing ones. Zero downtime. Zero coupling.
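Temporal provides the durable execution described above. As a rough illustration of the underlying idea only (this is not Temporal's API, and every name here is hypothetical), here is a minimal Python sketch in which each step's result is checkpointed before the next step runs, so a restarted process skips work that already completed:

```python
import json
from pathlib import Path

def run_workflow(steps, state_file="workflow_state.json"):
    """Run named steps in order, checkpointing each result to disk.

    Illustrative only -- NOT Temporal's API. The point: if the process
    dies mid-run, already-completed steps are not re-executed on the
    next start, because their results were persisted.
    """
    path = Path(state_file)
    state = json.loads(path.read_text()) if path.exists() else {}
    for name, fn in steps:
        if name in state:                    # completed before a crash/restart
            continue
        state[name] = fn(state)              # execute the step
        path.write_text(json.dumps(state))   # checkpoint before moving on
    return state

# Hypothetical three-step flow: fetch -> transform -> publish
steps = [
    ("fetch",     lambda s: [1, 2, 3]),
    ("transform", lambda s: [x * 10 for x in s["fetch"]]),
    ("publish",   lambda s: sum(s["transform"])),
]
print(run_workflow(steps))  # {'fetch': [1, 2, 3], 'transform': [10, 20, 30], 'publish': 60}
```

Temporal does this with event-sourced workflow histories rather than a JSON file, which is what lets real workflows survive process crashes and restarts without replaying side effects.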
Start with a drop. One node. Two. Transform. Branch. Loop.
Each step tiny. But at the end — it's a river. A request turned system. A click turned outcome.
Most tools look good. Then break. We want graphs that keep running — even if Kafka dies, even if S3 lags, even if your code crashes.
| | Feature | Description |
|---|---|---|
| 🎨 | Browser-Native Canvas | Drag-and-drop workflow editor — real IDE feel, zero install |
| 🔁 | Indestructible Execution | Workflows survive crashes, restarts, and infrastructure failures |
| ⚡ | Live Streaming Updates | Watch your workflow execute step-by-step in real time |
| 📊 | Columnar Data Passing | Parquet tables between nodes — fast, query-ready |
| 🧪 | AI / ML Ready | Train models with Unsloth, run inference, all inside your flow |
| 🐳 | Self-Hostable | Docker Compose up and you're running |
┌─────────────────────────────────────────────────────┐
│ BROWSER │
│ Eclipse Theia + React Flow (drag & drop canvas) │
└──────────────────────┬──────────────────────────────┘
│ WebSocket / REST
▼
┌─────────────────────────────────────────────────────┐
│ BACKEND (Kotlin + Spring Boot) │
│ Typed, clean, contract-safe API server │
└──────┬──────────────────────────────┬───────────────┘
│ │
▼ ▼
┌──────────────┐ ┌───────────────────────┐
│ Temporal │ │ Kafka → MQTT (WS) │
│ Durable │ │ Live event stream │
│ Execution │ │ step-by-step updates │
└──────────────┘ └───────────────────────┘
│
▼
┌─────────────────────────────────────────────────────┐
│ Storage: AWS S3 / MinIO (local dev) │
│ Format: Apache Parquet — columnar, fast │
└─────────────────────────────────────────────────────┘
| Layer | Technology | Why |
|---|---|---|
| Canvas | Eclipse Theia + React Flow | Full IDE experience in the browser |
| Engine | Temporal | Durable execution, auto-retry, horizontal scale |
| Events | Kafka → MQTT over WebSockets | Real-time step-by-step workflow updates |
| Data | Apache Parquet | Fast, columnar, query-ready inter-node data |
| Storage | AWS S3 / MinIO | Open, portable, no vendor lock-in |
| Backend | Kotlin + Spring Boot | Type-safe, clean, battle-tested |
| Resilience | Temporal | Fails? Retries. Crashes? Recovers. Forever. |
| Flow Control | Output `null` on a port = flow stops | Simple guard logic — real loops coming |
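The "output `null` on a port = flow stops" guard can be sketched in a few lines. This is an illustrative Python toy, not Continuum's node API; the node names and the linear topology are invented, and in a real graph the rule applies per output port on a branching flow:

```python
def execute(nodes, value):
    """Run nodes in sequence; a node returning None halts the branch.

    Hypothetical sketch of the guard rule -- a None on a node's output
    port means downstream nodes never run.
    """
    for node in nodes:
        value = node(value)
        if value is None:   # null on the output port: flow stops here
            return None
    return value

# A guard node passes the row through only if it matches a condition
only_positive = lambda row: row if row["amount"] > 0 else None
double        = lambda row: {**row, "amount": row["amount"] * 2}

print(execute([only_positive, double], {"amount": 5}))   # {'amount': 10}
print(execute([only_positive, double], {"amount": -3}))  # None
```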
🧠 Zero-config IDE setup — this repo ships with shared IntelliJ IDEA Run Configurations in the `.run/` directory. Just open the project and they'll be auto-detected in your Run/Debug toolbar — no manual setup needed. Included configurations: ApiServer, MessageBridge.
- Open the project in IntelliJ IDEA
- Start infrastructure:
cd docker && docker compose up -d
- Select a run configuration from the toolbar and hit ▶️ — start ApiServer and MessageBridge
- Start a feature worker from one of the feature repos (e.g., continuum-feature-base)
- Start the browser UI from continuum-workbench
- Open http://localhost:3002 and start building workflows!
# Clone the repo
git clone https://github.com/projectcontinuum/Continuum.git
cd Continuum
# Spin up infrastructure (Temporal, Kafka, MinIO, Mosquitto)
cd docker
docker compose up -d
# Build & run the backend
./gradlew :continuum-api-server:bootRun --args='--server.port=8080'
# (In another terminal) Build and run Message Bridge (Kafka → MQTT)
./gradlew :continuum-message-bridge:bootRun --args='--server.port=8082'
# Start a feature worker from a feature repo (e.g., continuum-feature-base)
# See: https://github.com/projectcontinuum/continuum-feature-base
# Start the browser UI from the workbench repo
# See: https://github.com/projectcontinuum/continuum-workbench

💡 Tip: Pass any `spring.*` property via `--args`:

./gradlew :continuum-api-server:bootRun --args='--spring.profiles.active=dev --server.port=9090'
💡 Full setup guide coming soon. For now — explore, break things, open issues.
Don't bloat. Distribute.
Most workflow engines pack every capability into a single worker — and when it breaks, everything breaks.
Continuum takes a different path:
- One worker = one set of capabilities. A worker that handles REST calls doesn't need to know about chemistry or AI training.
- Add capabilities by adding workers — not by inflating existing ones. Need RDKit nodes? Spin up an RDKit worker. Need Unsloth? That's its own worker. The core stays untouched.
- Zero downtime for everyone else. Deploy, update, or crash a worker — other workers keep running. No shared fate.
- Anyone can host a worker. You can write and run your own worker offering custom nodes — plug it into the shared registry, and the platform discovers it automatically.
┌─────────────┐   ┌─────────────┐   ┌─────────────┐   ┌─────────────┐
│ Base Worker │   │  AI Worker  │   │ Chem Worker │   │ Your Worker │
│  REST, CSV  │   │   Unsloth   │   │    RDKit    │   │  Anything!  │
│  Transform  │   │  Inference  │   │  Molecules  │   │  Your nodes │
└──────┬──────┘   └──────┬──────┘   └──────┬──────┘   └──────┬──────┘
       │                 │                 │                 │
       └─────────────────┴────────┬────────┴─────────────────┘
                                  ▼
                         ┌──────────────────┐
                         │ Shared Registry  │
                         │ (Auto-discovery) │
                         └──────────────────┘
This is the vision: a marketplace of workers — lightweight, independent, community-driven.
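To make the registry idea concrete, here is a toy Python sketch of workers announcing their node types to a shared registry that the platform then queries. The class, method names, worker names, and node types are all hypothetical; the real registry and discovery protocol will differ — this only shows the shape:

```python
class NodeRegistry:
    """Toy shared registry: workers announce the node types they serve,
    and the platform routes node executions by capability. Hypothetical
    sketch -- not Continuum's actual registry API."""

    def __init__(self):
        self._nodes = {}  # node type -> worker that serves it

    def register_worker(self, worker, node_types):
        for node_type in node_types:
            self._nodes[node_type] = worker

    def worker_for(self, node_type):
        return self._nodes.get(node_type)  # None if nobody serves it

    def available_nodes(self):
        return sorted(self._nodes)

registry = NodeRegistry()
# Each worker registers only its own capabilities -- no shared fate
registry.register_worker("base-worker", ["rest-call", "csv-read", "transform"])
registry.register_worker("ai-worker", ["unsloth-train", "inference"])

print(registry.worker_for("rest-call"))  # base-worker
print(registry.available_nodes())
# ['csv-read', 'inference', 'rest-call', 'transform', 'unsloth-train']
```

Because each worker owns its own entries, deploying or crashing one worker only affects the node types it registered; everything else keeps resolving.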
- Drag-and-drop visual workflow editor
- Durable execution with Temporal
- Live streaming updates via Kafka → MQTT
- Parquet-based data passing between nodes
- Base node library (Transform, REST, Branch, etc.)
- Unsloth AI training node
- IntelliJ IDEA shared run configurations — zero-config dev setup
- Multi-repo architecture — feature nodes developed independently as separate workers
- 🔁 True `while`/`for` loops with condition builder
- 🖥️ Electron standalone — run Continuum as a native desktop app, no browser required
- 🧪 More RDKit chemistry nodes — full RDKit integration for molecular workflows
- 🔥 PyTorch nodes — training, inference, and model management natively in your flow
- 🤖 Full AI training node suite (Unsloth ecosystem)
- 🔌 Plugin store — Slack, Stripe, Databases, AI services
- 🙋 Human-in-the-loop — interactive workflows with approval gates, manual review steps, and pause/resume
- 🐛 Visual debugger with timeline replay
- 👥 Auth, multi-tenancy & RBAC — authentication, role-based access control, and approval workflows baked in
- 🏗️ Multi-worker ecosystem — bring your own worker with custom nodes, auto-discovered via shared registry, zero downtime for others
- 📒 Central node repository — a single registry where all workers publish their available nodes, making them discoverable and composable across the platform
- ☸️ Helm chart — production-ready Kubernetes deployment for horizontal scaling
- 📦 Zero-config self-host with `docker compose up`
We're launching a dedicated YouTube channel with deep dives, demos, and architecture walkthroughs. Subscribe to stay in the loop:
- 🎥 Live workflow builds — watch real pipelines come together from scratch
- 🧠 Architecture breakdowns — how Temporal, Kafka, and the worker ecosystem fit together
- 💬 Community discussions — Q&A, roadmap talks, and contributor spotlights
🔔 Channel link dropping soon — star the repo so you don't miss it!
We don't want perfect. We want working.
If you see the gap — fill it. Check out the Issues page, pick something, and send a PR.
First time? Look for issues labeled good first issue. We're friendly.
Apache 2.0 — open, safe, patent-protected.