Description
What happened?
litellm.image_generation() accepts extra_headers via **kwargs, extracts it, and merges it into a local headers dict — but never passes that dict to openai_chat_completions.image_generation() on the openai / litellm_proxy / openai_compatible_providers code path.
This means any custom HTTP headers (e.g., W3C traceparent for distributed tracing, or custom auth headers) are silently dropped for image generation calls routed through the OpenAI-compatible path.
The azure and azure_ai code paths in the same function correctly forward extra_headers — only the openai path is broken.
Expected behavior
extra_headers passed to litellm.image_generation() should be forwarded to the HTTP request for all provider code paths, including openai, litellm_proxy, and openai_compatible_providers.
Actual behavior
extra_headers is silently dropped. The headers are extracted from kwargs and merged into a local dict, but that dict is never passed to the downstream openai_chat_completions.image_generation() call.
Steps to Reproduce
import litellm
# This should send the traceparent header to the proxy, but it doesn't
response = litellm.image_generation(
model="dall-e-3", # or any model routed via openai/litellm_proxy path
prompt="A red circle on white background",
n=1,
size="1024x1024",
extra_headers={"traceparent": "00-abc123-def456-01"},
)
# The traceparent header is NOT present in the outgoing HTTP request
To verify, enable HTTP-level logging or inspect outgoing requests: the traceparent header will be absent despite being passed via extra_headers.
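The silent drop can also be reproduced in isolation. The sketch below is a hypothetical stand-in for the buggy code path (not actual litellm code): a wrapper extracts extra_headers from **kwargs and merges it into a local dict, but never forwards that dict to the downstream call.

```python
# Hypothetical stand-in for the buggy path (names are illustrative, not litellm's).
def downstream_call(model, prompt, headers=None):
    # Records what the downstream HTTP layer would actually see.
    return {"model": model, "prompt": prompt, "headers": headers or {}}

def buggy_image_generation(model, prompt, **kwargs):
    extra_headers = kwargs.get("extra_headers", None)
    headers = kwargs.get("headers", None) or {}
    if extra_headers is not None:
        headers.update(extra_headers)  # merged into a local dict...
    # ...but `headers` is never passed downstream, mirroring the openai path:
    return downstream_call(model=model, prompt=prompt)

result = buggy_image_generation(
    "dall-e-3",
    "A red circle",
    extra_headers={"traceparent": "00-abc123-def456-01"},
)
assert "traceparent" not in result["headers"]  # silently dropped
```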
Root Cause Analysis
1. extra_headers is correctly extracted and merged (litellm/main.py)
In image_generation(), extra_headers is pulled from **kwargs and merged into a local headers dict:
# litellm/main.py, inside image_generation()
extra_headers = kwargs.get("extra_headers", None) # line ~232
headers: dict = kwargs.get("headers", None) or {}
if extra_headers is not None:
    headers.update(extra_headers) # line ~236
2. Azure paths correctly forward headers ✅
Both azure and azure_ai paths pass headers=headers to their downstream calls:
# azure path (line ~393):
model_response = azure_chat_completions.image_generation(
...
headers=headers, # ✅ forwarded
)
# azure_ai path (line ~435):
if extra_headers is not None:
    optional_params["extra_headers"] = extra_headers # ✅ forwarded via optional_params
...
model_response = azure_chat_completions.image_generation(
...
headers=headers, # ✅ forwarded
)
3. OpenAI path does NOT forward headers ❌
The openai / litellm_proxy / openai_compatible_providers path calls openai_chat_completions.image_generation() without headers:
# litellm/main.py, line ~474:
elif (
custom_llm_provider == "openai"
or custom_llm_provider == LlmProviders.LITELLM_PROXY.value
or custom_llm_provider in litellm.openai_compatible_providers
):
organization: Optional[str] = kwargs.get("organization", None)
model_response = openai_chat_completions.image_generation(
model=model,
prompt=prompt,
timeout=timeout,
api_key=api_key or dynamic_api_key,
api_base=api_base,
logging_obj=litellm_logging_obj,
optional_params=optional_params,
model_response=model_response,
organization=organization,
aimg_generation=aimg_generation,
client=client,
# ❌ NO headers= parameter
# ❌ extra_headers NOT added to optional_params
)
4. OpenAIChatCompletion.image_generation() doesn't accept headers either
The downstream method (litellm/llms/openai/openai.py, line ~1436) has no headers or extra_headers parameter:
def image_generation(
self, model, prompt, timeout, optional_params, logging_obj,
api_key=None, api_base=None, model_response=None,
client=None, aimg_generation=None, organization=None,
) -> ImageResponse:
data = {"model": model, "prompt": prompt, **optional_params}
...
openai_client = self._get_openai_client(...)
_response = openai_client.images.generate(**data, timeout=timeout)
The OpenAI Python SDK's images.generate() does support extra_headers as a parameter. So the fix needs to:
- Pass extra_headers into optional_params (or as a separate parameter) in litellm/main.py
- Extract and pass extra_headers to openai_client.images.generate() in OpenAIChatCompletion.image_generation()
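The downstream half of the change could look like the following sketch (assumed helper names, not the actual litellm method): pop extra_headers out of optional_params before building the request payload, then hand it to the SDK call separately, since the SDK accepts it as a native keyword argument.

```python
# Sketch only; `sdk_generate` stands in for openai_client.images.generate.
def image_generation_sketch(model, prompt, optional_params, sdk_generate, timeout=None):
    # Pop extra_headers so it is not mixed into the request body payload.
    extra_headers = optional_params.pop("extra_headers", None)
    data = {"model": model, "prompt": prompt, **optional_params}
    call_kwargs = dict(data, timeout=timeout)
    if extra_headers is not None:
        call_kwargs["extra_headers"] = extra_headers
    return sdk_generate(**call_kwargs)
```

Popping (rather than leaving extra_headers inside the body dict) keeps the request payload clean if the SDK ever validates unknown body fields.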
Suggested Fix
Option A (minimal, consistent with azure_ai pattern): Add extra_headers to optional_params before the openai call in litellm/main.py:
# In image_generation(), before the openai_chat_completions.image_generation() call:
if extra_headers is not None:
    optional_params["extra_headers"] = extra_headers
Since optional_params is unpacked into data and then passed to openai_client.images.generate(**data), this will forward extra_headers to the OpenAI SDK, which natively supports it.
Option B (more robust): Also add explicit headers parameter support to OpenAIChatCompletion.image_generation(), matching the azure handler pattern.
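Option B might look like the sketch below (parameter names are assumptions modeled loosely on the azure handler pattern, not the actual litellm signature): accept headers explicitly and merge it with any extra_headers already present in optional_params, then pass the merged dict to the SDK call.

```python
# Sketch of Option B (assumed signature; not the actual litellm method).
def image_generation_with_headers(model, prompt, optional_params, sdk_generate,
                                  headers=None):
    # Merge an explicit headers= argument with any extra_headers passed via
    # optional_params; the explicit headers win on key conflicts.
    merged = dict(optional_params.pop("extra_headers", None) or {})
    merged.update(headers or {})
    data = {"model": model, "prompt": prompt, **optional_params}
    if merged:
        data["extra_headers"] = merged
    return sdk_generate(**data)
```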
Relevant log output
No error output — the headers are silently dropped with no warning or error message.
What part of LiteLLM is this about?
SDK (litellm Python package)
What LiteLLM version are you on?
v1.81.16
Twitter / LinkedIn details
N/A