
fix(openrouter): pattern-based fix for native model double-stripping#22320

Open
tombii wants to merge 1 commit into BerriAI:main from tombii:fix/openrouter-native-model-double-stripping

Conversation

@tombii tombii commented Feb 27, 2026

Relevant issues

Fixes #16353
Supersedes #20516

Pre-Submission checklist

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement; see details)
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

PR #20516 fixed the double-stripping issue for three hard-coded native OpenRouter models (openrouter/auto, openrouter/free, openrouter/bodybuilder) by maintaining a static set. However, this approach will not fix the issue for other native OpenRouter models such as openrouter/aurora-alpha or openrouter/polaris-alpha.

This PR replaces the hardcoded set with a pattern-based check in _get_openai_compatible_provider_info: after stripping the outer openrouter/ provider prefix, if the remaining model name still starts with openrouter/, return immediately without stripping again.

# Before (hardcoded set — won't cover openrouter/aurora-alpha etc.)
NATIVE_OPENROUTER_MODELS = {"openrouter/auto", "openrouter/free", "openrouter/bodybuilder"}
if model in NATIVE_OPENROUTER_MODELS:
    return model, "openrouter", dynamic_api_key, api_base

# After (pattern-based — handles any native OpenRouter model)
if custom_llm_provider == "openrouter" and model.startswith("openrouter/"):
    dynamic_api_key = api_key or get_secret_str("OPENROUTER_API_KEY")
    return model, custom_llm_provider, dynamic_api_key, api_base

This handles all current and future native OpenRouter models without needing a list to maintain.

Tests added

tests/test_litellm/llms/openrouter/test_openrouter_provider_routing.py:

  • Double-prefixed native models strip correctly (including aurora-alpha, polaris-alpha, and arbitrary future models)
  • Second call to get_llm_provider on an already-stripped native model does not strip further
  • Regular OpenRouter models (e.g. openrouter/anthropic/claude-3-haiku) still strip normally
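The stripping rule these tests exercise can be sketched as a standalone function. This is a hypothetical stand-in for the relevant branch of litellm's _get_openai_compatible_provider_info, not the actual implementation; only the openrouter/ handling is modeled:

```python
def resolve_once(model: str) -> tuple[str, str]:
    """Strip one outer 'openrouter/' provider prefix from a model string.

    Hypothetical sketch: the pattern-based guard from this PR means any
    'openrouter/' remaining after the outer prefix is stripped belongs to
    the native model ID and is returned untouched (in litellm, the early
    return prevents later code paths from stripping it again).
    """
    prefix = "openrouter/"
    if model.startswith(prefix):
        remainder = model[len(prefix):]
        return remainder, "openrouter"
    return model, "openrouter"

# Double-prefixed native models strip exactly once
assert resolve_once("openrouter/openrouter/aurora-alpha")[0] == "openrouter/aurora-alpha"
assert resolve_once("openrouter/openrouter/polaris-alpha")[0] == "openrouter/polaris-alpha"
# Regular OpenRouter models still strip normally
assert resolve_once("openrouter/anthropic/claude-3-haiku")[0] == "anthropic/claude-3-haiku"
```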

fix(openrouter): pattern-based fix for native model double-stripping

Replace the hardcoded NATIVE_OPENROUTER_MODELS set approach with a
pattern-based check in _get_openai_compatible_provider_info: after
stripping the outer "openrouter/" provider prefix, if the remaining
model name still starts with "openrouter/", return immediately without
further stripping.

This fixes openrouter/openrouter/aurora-alpha, openrouter/openrouter/polaris-alpha,
and any future native OpenRouter models — not just the three hard-coded
ones (auto, free, bodybuilder) from the previous approach.

Fixes BerriAI#16353

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>


greptile-apps bot commented Feb 27, 2026

Greptile Summary

This PR replaces a hardcoded set of native OpenRouter models (openrouter/auto, openrouter/free, openrouter/bodybuilder) with a pattern-based check that handles any model whose ID starts with openrouter/ on the OpenRouter API. The core fix in _get_openai_compatible_provider_info is clean and correct — it prevents the openrouter/ prefix from being stripped twice when a user specifies openrouter/openrouter/<model>.

  • The pattern-based early return in get_llm_provider_logic.py correctly detects when the remaining model name still starts with openrouter/ after the outer provider prefix is stripped, and returns immediately to preserve the native model ID
  • Tests for the primary use case (double-prefixed input → single strip) and regular model passthrough are well-structured
  • Issue: test_no_double_strip_on_second_call has an incorrect assertion — passing an already-resolved model name like openrouter/aurora-alpha back through get_llm_provider will strip it to aurora-alpha, not preserve it. This test likely fails as written.

Confidence Score: 3/5

  • Core fix is correct but one test has a likely-failing assertion that needs to be verified and fixed before merge
  • The main code change is sound and well-targeted — it solves the double-stripping issue for all native OpenRouter models without maintaining a hardcoded list. However, the test_no_double_strip_on_second_call test appears to assert incorrect expected values, which suggests it either wasn't run or there's a code path I'm not seeing. This needs verification before merging.
  • tests/test_litellm/llms/openrouter/test_openrouter_provider_routing.py — the test_no_double_strip_on_second_call test's second-call assertion appears incorrect

Important Files Changed

Filename Overview
litellm/litellm_core_utils/get_llm_provider_logic.py Adds a pattern-based early return for native OpenRouter models (e.g. openrouter/openrouter/auto) to prevent double-stripping of the openrouter/ prefix. The logic is clean and correctly placed within the existing provider routing function.
tests/test_litellm/llms/openrouter/test_openrouter_provider_routing.py Good test coverage overall, but test_no_double_strip_on_second_call has incorrect assertions — the second get_llm_provider call on an already-resolved model like openrouter/aurora-alpha will strip it further to aurora-alpha, failing the assertion that expects idempotent output.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["User input: openrouter/openrouter/aurora-alpha"] --> B["get_llm_provider()"]
    B --> C{"model prefix in provider_list?"}
    C -->|Yes: openrouter| D["_get_openai_compatible_provider_info()"]
    D --> E["Split: provider='openrouter', model='openrouter/aurora-alpha'"]
    E --> F{"provider=='openrouter' AND\nmodel starts with 'openrouter/'?"}
    F -->|Yes - NEW CHECK| G["Early return:\nmodel='openrouter/aurora-alpha'\nprovider='openrouter'"]
    F -->|No| H["Continue to provider elif chain"]
    H --> I["Return model with provider-specific\napi_base and api_key"]

    J["User input: openrouter/anthropic/claude-3-haiku"] --> B
    D --> K["Split: provider='openrouter', model='anthropic/claude-3-haiku'"]
    K --> F

Last reviewed commit: d292da2

@greptile-apps greptile-apps bot left a comment


2 files reviewed, 1 comment


Comment on lines +53 to +60
def test_no_double_strip_on_second_call(self, input_model, expected_model):
"""Simulates two consecutive get_llm_provider calls (bridge → completion)."""
model_first, provider, _, _ = litellm.get_llm_provider(model=input_model)
assert model_first == expected_model

model_second, provider2, _, _ = litellm.get_llm_provider(model=model_first)
assert provider2 == "openrouter"
assert model_second == expected_model

Test assertion appears incorrect for second call

On the second call, litellm.get_llm_provider(model="openrouter/aurora-alpha") enters _get_openai_compatible_provider_info where the model is split: custom_llm_provider = "openrouter", model = "aurora-alpha". Since "aurora-alpha" does not start with "openrouter/", the new early-return guard doesn't trigger, and the function returns model = "aurora-alpha" (not "openrouter/aurora-alpha").

This means the assertion model_second == expected_model (i.e., "aurora-alpha" == "openrouter/aurora-alpha") should fail. The second get_llm_provider call strips the prefix as designed — this is arguably the correct behavior when the caller passes an already-resolved model name back in. The test should either:

  1. Assert model_second == "aurora-alpha" (the actual output), or
  2. Not pass the already-resolved model back through get_llm_provider without re-adding the openrouter/ provider prefix.
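The reviewer's walk-through can be reproduced with a toy version of the split. This is a deliberately minimal sketch (the real logic lives in litellm's _get_openai_compatible_provider_info); it shows why two consecutive resolutions are not idempotent:

```python
def resolve(model: str) -> str:
    """Toy model of the split the reviewer describes: each call strips
    at most one outer 'openrouter/' provider prefix."""
    prefix = "openrouter/"
    return model[len(prefix):] if model.startswith(prefix) else model

# First call (bridge): outer provider prefix removed, native ID preserved
first = resolve("openrouter/openrouter/aurora-alpha")
# Second call (completion): the native ID's own prefix now looks like a
# provider prefix and is stripped too, so the result is not idempotent
second = resolve(first)
```

Here first is "openrouter/aurora-alpha" and second is "aurora-alpha", matching the reviewer's claim that the test's second-call assertion of idempotent output should fail.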

@krrishdholakia

@tombii can you address the greptile feedback please?



Development

Successfully merging this pull request may close these issues.

[Bug]: Consuming openrouter/polaris-alpha via Anthropic /v1/messages doesn't work
