
feat: add MiniMax as alternative LLM provider #45

Open

octo-patch wants to merge 2 commits into CopilotKit:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add multi-provider LLM factory (llm_provider.py) with auto-detection from API keys
  • MiniMax M2.7 / M2.7-highspeed supported via OpenAI-compatible API with temperature clamping
  • New env vars: LLM_PROVIDER, LLM_BASE_URL, LLM_TEMPERATURE, MINIMAX_API_KEY
  • Updated .env.example and README model table with MiniMax docs

Changes

| File | Change |
| --- | --- |
| apps/agent/src/llm_provider.py | New provider factory with presets, auto-detection, temperature clamping |
| apps/agent/main.py | Replace hardcoded ChatOpenAI() with create_llm() factory |
| .env.example | Add MiniMax API key and provider config docs |
| README.md | Add MiniMax M2.7 to model table + usage instructions |
| apps/agent/tests/ | 28 unit tests + 3 integration tests |

How it works

Set MINIMAX_API_KEY in your .env and the provider is auto-detected. Defaults to MiniMax-M2.7 (1M context window).

Existing OpenAI usage is fully backward-compatible.

Test plan

  • 28 unit tests covering provider detection, presets, factory, temperature clamping, edge cases
  • 3 integration tests against real MiniMax API (basic completion, streaming, multi-turn)
  • Manual verification with make dev using MiniMax M2.7 for generative UI

Add multi-provider LLM factory with auto-detection from API keys.
MiniMax M2.7/M2.7-highspeed models supported via OpenAI-compatible API
with temperature clamping and configurable base URL.

- New llm_provider.py module with create_llm() factory
- Provider auto-detection: MINIMAX_API_KEY → minimax, else openai
- LLM_PROVIDER, LLM_BASE_URL, LLM_TEMPERATURE env vars
- 28 unit tests + 3 integration tests
- Updated .env.example and README with MiniMax docs

@JiwaniZakir left a comment


In apps/agent/src/llm_provider.py, the API key resolution in create_llm() has a silent fallback that can cause confusing failures:

```python
api_key = os.environ.get(api_key_env) or os.environ.get("OPENAI_API_KEY", "")
```

When LLM_PROVIDER=minimax is set explicitly but MINIMAX_API_KEY is absent, this silently passes the OPENAI_API_KEY value to MiniMax's endpoint — which will produce a cryptic authentication error rather than a clear configuration message. The fallback to OPENAI_API_KEY should be removed or replaced with an explicit ValueError when the required key for the selected provider is missing.

Additionally, _detect_provider() returns an unvalidated string, so an unrecognized LLM_PROVIDER value (e.g., "anthropic") causes PROVIDER_PRESETS.get(provider, {}) to silently return an empty dict, and create_llm() proceeds with OpenAI defaults — no warning emitted. Adding a check like if explicit and explicit not in PROVIDER_PRESETS: raise ValueError(...) would catch misconfiguration early.

The test file (test_llm_provider.py) appears to cover the auto-detection paths well, but given the above, a test asserting that setting LLM_PROVIDER=minimax without MINIMAX_API_KEY raises an error (rather than silently using the OpenAI key) would be valuable.

When LLM_PROVIDER is explicitly set to a non-openai provider (e.g.
minimax) but the corresponding API key env var is absent, the code now
raises a clear ValueError instead of silently falling back to
OPENAI_API_KEY, which would cause cryptic auth errors at the wrong
endpoint. Unrecognized LLM_PROVIDER values also raise ValueError with
a list of supported providers.

Co-Authored-By: Octopus <liyuan851277048@icloud.com>
@octo-patch
Author

Good catch @JiwaniZakir! Fixed in the latest push:

  1. Explicit error for missing API key: When a provider is explicitly selected via LLM_PROVIDER but its API key env var is not set, the code now raises a clear ValueError instead of silently falling back to OPENAI_API_KEY.
  2. Provider validation: Unrecognized LLM_PROVIDER values now raise a ValueError with a list of supported providers.

The OPENAI_API_KEY fallback is only used when auto-detecting the provider (no explicit LLM_PROVIDER set).
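The fix described above could look roughly like the following. This is a hedged sketch, not the actual patch; the helper name `resolve_api_key` and the `SUPPORTED` mapping are assumptions:

```python
# Illustrative sketch of the described fix: explicit providers must have
# their own key, and unknown providers fail fast. Names are assumptions.
import os

SUPPORTED = {"openai": "OPENAI_API_KEY", "minimax": "MINIMAX_API_KEY"}

def resolve_api_key(provider_explicitly_set: bool, provider: str) -> str:
    if provider not in SUPPORTED:
        # Unrecognized LLM_PROVIDER values raise with the supported list.
        raise ValueError(
            f"Unknown LLM_PROVIDER {provider!r}; supported: {sorted(SUPPORTED)}"
        )
    key = os.environ.get(SUPPORTED[provider], "")
    if not key:
        if provider_explicitly_set:
            # No silent fallback: an explicitly selected provider
            # must supply its own API key.
            raise ValueError(
                f"LLM_PROVIDER={provider} is set but "
                f"{SUPPORTED[provider]} is not configured"
            )
        # Auto-detection path may still fall back to OPENAI_API_KEY.
        key = os.environ.get("OPENAI_API_KEY", "")
    return key
```

Failing fast here turns a cryptic 401 from the wrong endpoint into a configuration error at startup, which is where the reviewer's suggested test would catch it.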

@JiwaniZakir

The temperature clamping logic in llm_provider.py is worth documenting explicitly in .env.example — MiniMax clamps to [0.01, 1.0] which is a non-obvious constraint that will silently alter behavior if someone passes temperature=0 expecting deterministic output. Consider raising a warning log when clamping occurs rather than silently adjusting the value. The auto-detection from API key presence is clean, but priority order matters if someone has both OPENAI_API_KEY and MINIMAX_API_KEY set — the README should clarify that LLM_PROVIDER takes precedence to avoid surprises.
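The warn-on-clamp suggestion could be sketched as follows. The `[0.01, 1.0]` bounds come from the comment above; the function name and logger setup are illustrative:

```python
# Hedged sketch of temperature clamping with a warning instead of a
# silent adjustment. Bounds are MiniMax's per the review comment;
# the function name is an assumption.
import logging

logger = logging.getLogger(__name__)

MINIMAX_TEMP_MIN, MINIMAX_TEMP_MAX = 0.01, 1.0

def clamp_temperature(temperature: float) -> float:
    """Clamp into MiniMax's accepted range, warning if the value changes."""
    clamped = min(max(temperature, MINIMAX_TEMP_MIN), MINIMAX_TEMP_MAX)
    if clamped != temperature:
        logger.warning(
            "temperature %.3f is outside MiniMax range [%.2f, %.2f]; "
            "clamped to %.3f",
            temperature, MINIMAX_TEMP_MIN, MINIMAX_TEMP_MAX, clamped,
        )
    return clamped
```

A user passing `temperature=0` for deterministic output would then see a log line explaining why they got `0.01` instead of silently different sampling behavior.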
