I am poor #19
Open
SucksToBeAnik wants to merge 2 commits into tsensei:main from
…nd CLI options

- Added support for Ollama as a local LLM and image provider, including interactive model selection.
- Introduced Chatterbox TTS for text-to-speech functionality, with setup instructions and requirements.
- Updated README to reflect new prerequisites and usage instructions for local development.
- Enhanced CLI options to include new parameters for Ollama and Chatterbox configurations.
- Improved cost estimation logic to account for free local providers.
- Added validation for local provider availability and setup processes.

This update significantly expands the capabilities of the pipeline for local development and usage.
…topic context collection

- Updated the `collectTopicBrief` function to take the LLM provider as a parameter, allowing dynamic question generation based on the selected LLM.
- Improved the user interaction flow for providing topic context, offering options for guided questions, freeform input, or skipping.
- Enhanced cost estimation logic to accommodate the new LLM provider parameter.
- Refined error handling and output messages in the Chatterbox TTS provider for a better user experience.
- Updated relevant interfaces and types to ensure consistency across the pipeline.

These changes significantly improve the flexibility and usability of the pipeline for local development.
tsensei requested changes on Apr 2, 2026
feat: zero-API-key local mode (Ollama LLM + Chatterbox TTS + Ollama image gen)
This PR adds a complete local-first pipeline that requires no API keys, enabling anyone to run OpenReels entirely on their own hardware using open-source models. Each provider is independent, so users can freely mix free local providers with paid cloud ones.
What's new
🦙 Ollama LLM provider (`--provider ollama`)

- `OllamaLLM` provider hitting Ollama's local `/api/chat` endpoint with structured JSON output
- A `ResearchResult` is returned when `enableWebSearch=true` so the pipeline continues seamlessly

🖼️ Ollama image generation (`--image-provider ollama`)

- `OllamaImage` provider using Ollama's experimental `/api/generate` image endpoint
- Supported models: `x/flux2-klein:4b`, `x/flux2-klein:9b`, `x/z-image-turbo:latest`
- Responses carry `data.image` (singular), not `data.images[]`

🎙️ Chatterbox Turbo TTS (`--tts-provider chatterbox`)

- `ChatterboxTTS` provider bridging to a Python subprocess (`scripts/chatterbox_tts.py`)
- Installs `chatterbox-tts` into an isolated venv at `~/.openreels/chatterbox-venv` on first use, so no manual setup is required
- Detects `uv` and uses it when available (10–100× faster installs, handles uv-managed Pythons that block `ensurepip`)
- Falls back to `python3.12`/`python3.11` with `python -m venv`
- Pins `setuptools<70` to fix the `pkg_resources` import required by the `perth` watermarker dependency
- Uses `spawn` (not `spawnSync`) to keep the Node event loop unblocked during model load

UX highlights
Interactive model selection
When running with `--provider ollama` or `--image-provider ollama` without specifying model flags, OpenReels presents a numbered selection menu. Pulled models are shown first with a ✓ marker using their exact pulled tag, since Ollama returns a 404 for bare names like `gemma3`. The image model list is locked to image-capable models only, so LLM models never leak in.

Interactive topic brief (replaces web search)
Since Ollama has no web search, a guided context-gathering flow runs before the pipeline. Three modes: guided questions, freeform input, or skip. The collected context is passed to the creative director as `key_facts` and a `summary`.

It can also be supplied non-interactively via `--brief "..."` for Docker/CI use.

New CLI flags
- `--provider ollama`
- `--tts-provider chatterbox`
- `--image-provider ollama`
- `--ollama-model <tag>` (default: `llama3.1:8b`)
- `--ollama-image-model <tag>` (default: `x/flux2-klein:4b`)
- `--ollama-host <url>` (default: `http://localhost:11434`)
- `--brief <text>`
- `--chatterbox-device <device>` (`cpu`, `cuda`, or `mps`)
- `--chatterbox-audio-prompt <path>`

Example commands
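Illustrative invocations built from the flags above. The `openreels` entry-point name is an assumption (substitute however the CLI is actually launched); the flags and model tags come from this PR.

```bash
# Fully local run, no API keys (entry-point name `openreels` is assumed)
openreels \
  --provider ollama --ollama-model llama3.1:8b \
  --image-provider ollama --ollama-image-model x/flux2-klein:4b \
  --tts-provider chatterbox --chatterbox-device mps \
  --brief "A short history of the Mariana Trench"

# Point the Ollama providers at a non-default host, e.g. a box on the LAN
openreels --provider ollama --image-provider ollama \
  --ollama-host http://192.168.1.50:11434
```

Because each provider is independent, any of the three `--*-provider` flags can be swapped for a paid cloud provider without affecting the others.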
Known issue
`text_card` scenes may show a prompt description instead of display text. When using smaller local models, the creative director sometimes writes the `visual_prompt` for `text_card` scenes as a style description ("Bold white text: 'Falling into the cave.'") instead of just the display text ("Falling into the cave."). This is a model instruction-following limitation: smaller models don't reliably follow the constraint to write only the verbatim display text.

Testing
Prerequisites: Ollama installed and running (`ollama serve`), at least one LLM model pulled (`ollama pull llama3.1:8b`), and Python 3.11 or 3.12 on PATH.
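As a quick smoke test of the structured-JSON chat call described above, here is a sketch (not the PR's actual implementation). The `buildChatRequest` and `smokeTest` helpers are illustrative names, but the `/api/chat` path, `format: "json"`, `stream: false`, and the `message.content` response field follow Ollama's documented API.

```typescript
// Sketch of a minimal /api/chat smoke test. Helper names are illustrative;
// the request/response shapes follow Ollama's public chat API.

interface ChatRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  format?: "json"; // ask Ollama to emit a JSON object
  stream: boolean; // false => one JSON response instead of a chunk stream
}

function buildChatRequest(model: string, prompt: string): ChatRequest {
  return {
    model,
    messages: [
      { role: "system", content: "Reply with a JSON object only." },
      { role: "user", content: prompt },
    ],
    format: "json",
    stream: false,
  };
}

async function smokeTest(host = "http://localhost:11434"): Promise<string> {
  const res = await fetch(`${host}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest("llama3.1:8b", 'Return {"ok": true}')),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.message.content; // non-streaming responses carry message.content
}
```

With `ollama serve` running and `llama3.1:8b` pulled, `smokeTest()` should resolve to a small JSON string; a connection error instead points at the host/port, and a 404 points at the model tag.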