
fix(core): backfill OpenAI responses model in spans#19810

Open
meronogbai wants to merge 2 commits into getsentry:develop from meronogbai:fix/openai-responses-model-backfill

Conversation


@meronogbai meronogbai commented Mar 14, 2026

Summary

OpenAI Responses API calls can omit model in the request payload when using stored prompts. In that case, Sentry currently records gen_ai.request.model = "unknown" even though the final OpenAI response includes the actual model.

This change keeps the existing request-time fallback behavior, but overwrites the request model with response.model once the Responses API response arrives. It also updates the span name from "chat unknown" to "chat <model>".

This is now handled for both:

  • non-streaming Responses API calls
  • streaming Responses API calls
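The backfill described above can be sketched roughly as follows. The helper name and the plain span object are illustrative assumptions, not the actual SDK internals; the real change lives in the integration's non-streaming and streaming response handlers:

```typescript
// Hedged sketch of the backfill: once the Responses API result is
// available, overwrite the request-time "unknown" fallback with the
// model reported by OpenAI. Attribute keys follow the gen_ai
// conventions mentioned in the PR; the span shape is simplified.
interface SpanLike {
  name: string;
  attributes: Record<string, string>;
}

function backfillResponseModel(span: SpanLike, responseModel: string | undefined): void {
  if (!responseModel) return;

  // Only overwrite the request-time fallback value, never a model the
  // caller explicitly set in the request payload.
  if (span.attributes['gen_ai.request.model'] === 'unknown') {
    span.attributes['gen_ai.request.model'] = responseModel;

    // Update the span name from "chat unknown" to "chat <model>".
    if (span.name === 'chat unknown') {
      span.name = `chat ${responseModel}`;
    }
  }

  // The response model is always recorded as-is.
  span.attributes['gen_ai.response.model'] = responseModel;
}
```

For streaming calls, the same function would be invoked when the final event carrying the model arrives, so both code paths converge on one backfill step.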

Agents dashboard screenshots

In production where this change isn't applied:

(screenshot)

In development where this change was applied manually:

(screenshot)

Before submitting a pull request, please take a look at our
Contributing guidelines and verify:

  • If you've added code that should be tested, please add tests.
  • Ensure your code lints and the test suite passes (yarn lint) & (yarn test).
  • Link an issue if there is one related to your pull request. If no issue is linked, one will be auto-generated and linked.

Closes #issue_link_here

@meronogbai meronogbai marked this pull request as ready for review March 14, 2026 22:44

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

@meronogbai meronogbai force-pushed the fix/openai-responses-model-backfill branch from 57deae1 to 30ee31b on March 14, 2026 23:08
Member

@nicohrubec nicohrubec left a comment


Hey, this approach doesn't really make sense to me. If there is no model defined at request time by the user then it seems to me semantically incorrect to backfill the request model retrospectively. Can you maybe explain the use case here a bit more and what exactly you are trying to achieve? If your concern is mainly that the UI shows the actually used model then this is potentially something that should be fixed in the product instead of the SDK.

@meronogbai
Author

meronogbai commented Mar 23, 2026

The model is defined in OpenAI's prompt dashboard, so it's configured elsewhere rather than in the code that calls the OpenAI SDK.

See example in

https://developers.openai.com/api/docs/guides/prompting#create-a-prompt

My concern is the UI, as you said. I'm concerned about different models being grouped under "unknown". Can you point to where I should fix it, if not here?
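To illustrate the use case: a Responses API call against a stored prompt carries no model field at all, because the model is chosen in the dashboard. A minimal sketch of such a payload (the prompt id below is a placeholder, not a real id):

```typescript
// Hedged sketch of a Responses API request body that uses a stored
// prompt. The model lives in OpenAI's prompt dashboard, so the
// payload itself has no `model` key -- which is why the SDK only
// sees "unknown" at request time.
const requestPayload = {
  prompt: {
    id: 'pmpt_example_id', // placeholder; real ids come from the dashboard
    version: '1',
  },
  input: 'What is the capital of France?',
  // note: no `model` key here
};

const hasModel = 'model' in requestPayload;
console.log(hasModel); // false
```

Only the eventual response object reports which model actually ran, which is what the backfill in this PR reads.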

@meronogbai meronogbai requested a review from nicohrubec March 23, 2026 13:35
