fix(openrouter): pattern-based fix for native model double-stripping #22320
tombii wants to merge 1 commit into BerriAI:main
Conversation
Replace the hardcoded NATIVE_OPENROUTER_MODELS set approach with a pattern-based check in `_get_openai_compatible_provider_info`: after stripping the outer `openrouter/` provider prefix, if the remaining model name still starts with `openrouter/`, return immediately without further stripping. This fixes `openrouter/openrouter/aurora-alpha`, `openrouter/openrouter/polaris-alpha`, and any future native OpenRouter models, not just the three hard-coded ones (`auto`, `free`, `bodybuilder`) from the previous approach.

Fixes BerriAI#16353

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
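The guard described above can be sketched as a small standalone function. This is an illustrative stand-in, not litellm's actual `_get_openai_compatible_provider_info`; the helper name and return shape are assumptions:

```python
PREFIX = "openrouter/"

def resolve_openrouter_model(model: str) -> tuple:
    """Sketch of the pattern-based guard: strip the outer provider prefix
    once, and return early if the remainder still carries the prefix.

    Returns (resolved_model, provider). Illustrative only.
    """
    if not model.startswith(PREFIX):
        return model, None          # not routed through OpenRouter
    stripped = model[len(PREFIX):]  # strip the outer provider prefix once
    if stripped.startswith(PREFIX):
        # Native OpenRouter model (openrouter/openrouter/...): the remaining
        # name still starts with "openrouter/", so return immediately rather
        # than falling through to logic that could strip it a second time.
        return stripped, "openrouter"
    # Standard model: in litellm this path continues to provider-specific
    # handling (api_base, api_key, ...); here we just return the result.
    return stripped, "openrouter"
```

With this shape, `openrouter/openrouter/aurora-alpha` resolves to `openrouter/aurora-alpha` while `openrouter/anthropic/claude-3-haiku` is stripped normally.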
Greptile Summary

This PR replaces a hardcoded set of native OpenRouter models (`auto`, `free`, `bodybuilder`) with a pattern-based check that returns early whenever the model name still starts with `openrouter/` after the provider prefix has been stripped.
Confidence Score: 3/5
| Filename | Overview |
|---|---|
| litellm/litellm_core_utils/get_llm_provider_logic.py | Adds a pattern-based early return for native OpenRouter models (e.g. openrouter/openrouter/auto) to prevent double-stripping of the openrouter/ prefix. The logic is clean and correctly placed within the existing provider routing function. |
| tests/test_litellm/llms/openrouter/test_openrouter_provider_routing.py | Good test coverage overall, but test_no_double_strip_on_second_call has incorrect assertions — the second get_llm_provider call on an already-resolved model like openrouter/aurora-alpha will strip it further to aurora-alpha, failing the assertion that expects idempotent output. |
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["User input: openrouter/openrouter/aurora-alpha"] --> B["get_llm_provider()"]
    B --> C{"model prefix in provider_list?"}
    C -->|Yes: openrouter| D["_get_openai_compatible_provider_info()"]
    D --> E["Split: provider='openrouter', model='openrouter/aurora-alpha'"]
    E --> F{"provider=='openrouter' AND\nmodel starts with 'openrouter/'?"}
    F -->|Yes - NEW CHECK| G["Early return:\nmodel='openrouter/aurora-alpha'\nprovider='openrouter'"]
    F -->|No| H["Continue to provider elif chain"]
    H --> I["Return model with provider-specific\napi_base and api_key"]
    J["User input: openrouter/anthropic/claude-3-haiku"] --> B
    D --> K["Split: provider='openrouter', model='anthropic/claude-3-haiku'"]
    K --> F
```
Last reviewed commit: d292da2
```python
def test_no_double_strip_on_second_call(self, input_model, expected_model):
    """Simulates two consecutive get_llm_provider calls (bridge → completion)."""
    model_first, provider, _, _ = litellm.get_llm_provider(model=input_model)
    assert model_first == expected_model

    model_second, provider2, _, _ = litellm.get_llm_provider(model=model_first)
    assert provider2 == "openrouter"
    assert model_second == expected_model
```
Test assertion appears incorrect for second call
On the second call, litellm.get_llm_provider(model="openrouter/aurora-alpha") enters _get_openai_compatible_provider_info where the model is split: custom_llm_provider = "openrouter", model = "aurora-alpha". Since "aurora-alpha" does not start with "openrouter/", the new early-return guard doesn't trigger, and the function returns model = "aurora-alpha" (not "openrouter/aurora-alpha").
This means the assertion `model_second == expected_model` (i.e., `"aurora-alpha" == "openrouter/aurora-alpha"`) should fail. The second `get_llm_provider` call strips the prefix as designed; this is arguably the correct behavior when the caller passes an already-resolved model name back in. The test should either:
- assert `model_second == "aurora-alpha"` (the actual output), or
- not pass the already-resolved model back through `get_llm_provider` without re-adding the `openrouter/` provider prefix.
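The two-call behavior the reviewer describes can be reproduced with a minimal stand-in for the stripping logic (`resolve` is an illustrative helper, not litellm code):

```python
PREFIX = "openrouter/"

def resolve(model: str) -> str:
    """One pass of prefix stripping, with the new early-return guard."""
    if not model.startswith(PREFIX):
        return model
    stripped = model[len(PREFIX):]
    # Guard: a remainder still starting with "openrouter/" is returned as-is.
    # On the second call the remainder is "aurora-alpha", so the guard
    # cannot trigger and the outer prefix has already been consumed.
    return stripped

first = resolve("openrouter/openrouter/aurora-alpha")
second = resolve(first)
print(first)   # openrouter/aurora-alpha
print(second)  # aurora-alpha
```

The second pass yields `aurora-alpha`, which is exactly why the test's idempotence assertion cannot hold as written.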
@tombii can you address the greptile feedback please?
Relevant issues
Fixes #16353
Supersedes #20516
Pre-Submission checklist
- Unit test added in the `tests/litellm/` directory (adding at least 1 test is a hard requirement - see details)
- `make test-unit`

Type
🐛 Bug Fix
Changes
PR #20516 fixed the double-stripping issue for three hard-coded native OpenRouter models (`openrouter/auto`, `openrouter/free`, `openrouter/bodybuilder`) by maintaining a static set. However, this approach will not fix the issue for other native OpenRouter models such as `openrouter/aurora-alpha` or `openrouter/polaris-alpha`.

This PR replaces the hardcoded set with a pattern-based check in `_get_openai_compatible_provider_info`: after stripping the outer `openrouter/` provider prefix, if the remaining model name still starts with `openrouter/`, return immediately without stripping again. This handles all current and future native OpenRouter models without needing a list to maintain.
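The contrast with the superseded allowlist can be sketched as follows (the set contents come from PR #20516; the function names are illustrative, not litellm's):

```python
# Superseded approach (#20516): a static allowlist misses new native models.
NATIVE_OPENROUTER_MODELS = {
    "openrouter/auto",
    "openrouter/free",
    "openrouter/bodybuilder",
}

def is_native_allowlist(stripped_model: str) -> bool:
    """Old check: membership in a hand-maintained set."""
    return stripped_model in NATIVE_OPENROUTER_MODELS

def is_native_pattern(stripped_model: str) -> bool:
    """New check: any remainder still carrying the provider prefix."""
    return stripped_model.startswith("openrouter/")

# A model released after the set was written slips through the old check
# but is caught by the pattern:
assert not is_native_allowlist("openrouter/aurora-alpha")
assert is_native_pattern("openrouter/aurora-alpha")
```

The pattern check trades an exact allowlist for the invariant that every native OpenRouter model is published under the `openrouter/` namespace.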
Tests added

`tests/test_litellm/llms/openrouter/test_openrouter_provider_routing.py`:
- native OpenRouter models resolve correctly (`aurora-alpha`, `polaris-alpha`, and arbitrary future models)
- calling `get_llm_provider` on an already-stripped native model does not strip further
- standard models (e.g. `openrouter/anthropic/claude-3-haiku`) still strip normally