fix(core): backfill OpenAI responses model in spans #19810
meronogbai wants to merge 2 commits into getsentry:develop
Conversation
Cursor Bugbot has reviewed your changes and found 1 potential issue.
nicohrubec left a comment:
Hey, this approach doesn't really make sense to me. If there is no model defined at request time by the user then it seems to me semantically incorrect to backfill the request model retrospectively. Can you maybe explain the use case here a bit more and what exactly you are trying to achieve? If your concern is mainly that the UI shows the actually used model then this is potentially something that should be fixed in the product instead of the SDK.
The model is defined in OpenAI's prompt dashboard. So it's defined elsewhere, just not in the code calling the OpenAI SDK. See the example in https://developers.openai.com/api/docs/guides/prompting#create-a-prompt

My concern is the UI, as you said. I'm concerned about different models all being grouped under "unknown". Can you point to where I should fix it, if not here?
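To illustrate the situation described above, here is a minimal sketch of a Responses API request body that uses a stored prompt. The prompt id and variables are hypothetical; the point is that with stored prompts the model is configured in OpenAI's prompt dashboard, so the request body the SDK instrumentation sees can legitimately omit `model`.

```typescript
// Sketch: a Responses API request body using a stored prompt.
// The prompt id below is hypothetical. Because the model is chosen in
// OpenAI's prompt dashboard, the request body contains no `model` field;
// instrumentation only learns the model from `response.model` afterwards.
const requestBody = {
  prompt: {
    id: "pmpt_example123", // hypothetical stored-prompt id
    variables: { city: "Berlin" },
  },
};

console.log("model" in requestBody); // false
```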

Summary
OpenAI Responses API calls can omit `model` in the request payload when using stored prompts. In that case, Sentry currently records `gen_ai.request.model = "unknown"` even though the final OpenAI response includes the actual model.

This change keeps the existing request-time fallback behavior, but overwrites the request model with `response.model` once the Responses API response arrives. It also updates the span name from `chat unknown` to `chat <model>`.
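The backfill described above can be sketched as a small helper. This is illustrative only, not the actual Sentry SDK internals: the `SpanLike` shape and function name are assumptions, while the `gen_ai.request.model` attribute and the `chat <model>` span name come from the PR description.

```typescript
// Minimal sketch of the backfill idea (names are illustrative, not the
// real SDK implementation): once the Responses API response arrives,
// overwrite the request-model attribute and span name if the request
// had no model at request time.

interface SpanLike {
  name: string;
  attributes: Record<string, string>;
}

function backfillResponseModel(span: SpanLike, response: { model?: string }): void {
  if (!response.model) return;
  // Keep the request-time value if the user actually set a model;
  // only backfill the "unknown" fallback.
  if (span.attributes["gen_ai.request.model"] === "unknown") {
    span.attributes["gen_ai.request.model"] = response.model;
    span.name = `chat ${response.model}`;
  }
  span.attributes["gen_ai.response.model"] = response.model;
}

// Usage: a span created before the model was known.
const span: SpanLike = {
  name: "chat unknown",
  attributes: { "gen_ai.request.model": "unknown" },
};
backfillResponseModel(span, { model: "gpt-4.1-mini" });
console.log(span.name); // "chat gpt-4.1-mini"
```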
This is now handled for both:
Agents dashboard screenshots
In production where this change isn't applied:
In development where this change was applied manually:
Before submitting a pull request, please take a look at our Contributing guidelines and verify:

- Code passes the linter (`yarn lint`) & tests (`yarn test`).

Closes #issue_link_here