feat: add MiniMax as alternative LLM provider #45
octo-patch wants to merge 2 commits into CopilotKit:main
Conversation
Add a multi-provider LLM factory with auto-detection from API keys. MiniMax M2.7 / M2.7-highspeed models are supported via the OpenAI-compatible API, with temperature clamping and a configurable base URL.

- New `llm_provider.py` module with a `create_llm()` factory
- Provider auto-detection: `MINIMAX_API_KEY` → minimax, else openai
- `LLM_PROVIDER`, `LLM_BASE_URL`, `LLM_TEMPERATURE` env vars
- 28 unit tests + 3 integration tests
- Updated `.env.example` and README with MiniMax docs
JiwaniZakir
left a comment
In `apps/agent/src/llm_provider.py`, the API key resolution in `create_llm()` has a silent fallback that can cause confusing failures:

```python
api_key = os.environ.get(api_key_env) or os.environ.get("OPENAI_API_KEY", "")
```

When `LLM_PROVIDER=minimax` is set explicitly but `MINIMAX_API_KEY` is absent, this silently passes the `OPENAI_API_KEY` value to MiniMax's endpoint, which will produce a cryptic authentication error rather than a clear configuration message. The fallback to `OPENAI_API_KEY` should be removed, or replaced with an explicit `ValueError` when the required key for the selected provider is missing.
Additionally, `_detect_provider()` returns an unvalidated string, so an unrecognized `LLM_PROVIDER` value (e.g., `"anthropic"`) causes `PROVIDER_PRESETS.get(provider, {})` to silently return an empty dict, and `create_llm()` proceeds with OpenAI defaults, with no warning emitted. Adding a check like `if explicit and explicit not in PROVIDER_PRESETS: raise ValueError(...)` would catch the misconfiguration early.
The test file (`test_llm_provider.py`) appears to cover the auto-detection paths well, but given the above, a test asserting that setting `LLM_PROVIDER=minimax` without `MINIMAX_API_KEY` raises an error (rather than silently using the OpenAI key) would be valuable.
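Such a test could look like the following. It is sketched against a stand-in `create_llm`, since the fixed function isn't shown in this thread; in the real `test_llm_provider.py` one would import the module and likely use `pytest.raises` with the `monkeypatch` fixture instead of mutating `os.environ` directly.

```python
import os


def create_llm() -> dict:
    # Stand-in for llm_provider.create_llm with the suggested fix applied:
    # an explicit provider must have its own key; no OPENAI_API_KEY fallback.
    provider = os.environ.get("LLM_PROVIDER", "openai")
    key_env = {"openai": "OPENAI_API_KEY", "minimax": "MINIMAX_API_KEY"}[provider]
    api_key = os.environ.get(key_env)
    if not api_key:
        raise ValueError(f"{key_env} is required when LLM_PROVIDER={provider}")
    return {"provider": provider, "api_key": api_key}


def test_minimax_without_key_raises():
    os.environ["LLM_PROVIDER"] = "minimax"
    os.environ.pop("MINIMAX_API_KEY", None)
    os.environ["OPENAI_API_KEY"] = "sk-openai"  # must NOT be silently reused
    try:
        create_llm()
    except ValueError as e:
        assert "MINIMAX_API_KEY" in str(e)
    else:
        raise AssertionError("expected ValueError for the missing MiniMax key")
```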
When `LLM_PROVIDER` is explicitly set to a non-openai provider (e.g. `minimax`) but the corresponding API key env var is absent, the code now raises a clear `ValueError` instead of silently falling back to `OPENAI_API_KEY`, which would cause cryptic auth errors at the wrong endpoint. Unrecognized `LLM_PROVIDER` values also raise `ValueError` with a list of supported providers.

Co-Authored-By: Octopus <liyuan851277048@icloud.com>
Good catch @JiwaniZakir! Fixed in the latest push.
How it works
Set `MINIMAX_API_KEY` in your `.env` and the provider is auto-detected. Defaults to MiniMax-M2.7 (1M context window).
Existing OpenAI usage is fully backward-compatible.
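Concretely, the configuration described above would live in `.env`; a sketch, with placeholder values and the optional overrides commented out (the exact MiniMax endpoint is not specified in this thread):

```shell
# Presence of this key auto-selects the minimax provider:
MINIMAX_API_KEY=your-minimax-key-here

# Optional overrides:
# LLM_PROVIDER=minimax
# LLM_BASE_URL=https://example.invalid/v1   # custom OpenAI-compatible endpoint
# LLM_TEMPERATURE=0.7                       # clamped into the supported range
```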
Test plan