Surface finish_reason on assistant messages and token usage events #2254
Open
trungutt wants to merge 1 commit into docker:main from
Conversation
derekmisler previously approved these changes (Mar 26, 2026)
Add FinishReason to chat.Message and MessageUsage so API consumers can distinguish the root agent's final response from intermediate tool-call turns during live streaming.

- Propagate the provider's explicit finish_reason through the streaming pipeline (stop/length via early return, tool_calls tracked and preserved after the stream loop)
- Infer finish_reason when the provider sends a bare EOF: tool calls present → tool_calls, content present → stop, nothing → null
- Validate finish_reason against actual stream output (tool_calls requires tool calls; stop is overridden when tool calls exist)
- Reconstruct LastMessage on session restore so FinishReason is available for historical sessions (scoped to the parent session only)
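The bare-EOF inference described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation; the helper name and signature are assumptions (the real logic lives in the streaming pipeline around handleStream()):

```go
package main

import "fmt"

// inferFinishReason sketches the bare-EOF fallback: if the provider ends
// the stream without an explicit finish_reason, infer one from what the
// stream actually produced. Hypothetical helper for illustration only.
func inferFinishReason(hasToolCalls bool, content string) string {
	switch {
	case hasToolCalls:
		return "tool_calls" // tool calls present → tool_calls
	case content != "":
		return "stop" // content present → stop
	default:
		return "" // nothing → serialized as null
	}
}

func main() {
	fmt.Println(inferFinishReason(true, "partial text"))  // tool_calls
	fmt.Println(inferFinishReason(false, "final answer")) // stop
	fmt.Printf("%q\n", inferFinishReason(false, ""))      // ""
}
```

The same shape would cover the validation step: an explicit stop from the provider can be overridden to tool_calls when the stream turned out to contain tool calls.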
b718ef7 to 25bea5c
Summary
Surface the LLM's finish_reason through the streaming pipeline so API consumers can identify the root agent's final response during live streaming.

Motivation

finish_reason is a standard field returned by all major LLM providers (OpenAI, Anthropic, Google) indicating why the model stopped generating. Our provider adapters already parse it correctly, but handleStream() collapses it to a Stopped: bool, losing the distinction between stop, tool_calls, and length.

Clients consuming the SSE API need to distinguish the agent's final answer from intermediate tool-call turns, for example to avoid grouping the final response with working activity. Today there is no way to do this at the right time:
- Message shape is unreliable: a sub-agent's response has no tool_calls and looks identical to the root agent's final answer. A Stopped: true response can still contain tool calls (fire-and-forget). An EOF without a finish reason can produce a content-only message that isn't final.
- stream_stopped arrives too late: by the time it fires, the message is already rendered and grouped with intermediate steps.
- token_usage fires at the right moment (immediately after each assistant turn), and already carries session_id. With finish_reason added, a client can test the event's session_id and finish_reason together. This identifies the root agent's final response before grouping, while correctly ignoring sub-agent stop turns (different session_id).

Changes
finish_reason is now available in two places:

- MessageUsage.FinishReason, on token_usage SSE events, available during live streaming
- chat.Message.FinishReason, persisted in session history, available when loading past sessions
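The client-side check alluded to in the Motivation section could look roughly like this. The payload field names and the isFinalResponse helper are assumptions for illustration; rootSessionID is whatever session ID the client recorded when it opened the root session:

```go
package main

import "fmt"

// TokenUsage mirrors the token_usage SSE payload as described in this PR:
// it already carries session_id, and with this change also finish_reason.
// Field names here are an assumption for illustration.
type TokenUsage struct {
	SessionID    string `json:"session_id"`
	FinishReason string `json:"finish_reason"`
}

// isFinalResponse reports whether a token_usage event marks the root
// agent's final answer: a plain "stop" on the root session. This excludes
// intermediate tool_calls turns and sub-agent stop turns, since sub-agents
// carry a different session_id.
func isFinalResponse(u TokenUsage, rootSessionID string) bool {
	return u.SessionID == rootSessionID && u.FinishReason == "stop"
}

func main() {
	root := "sess-root"
	fmt.Println(isFinalResponse(TokenUsage{SessionID: root, FinishReason: "tool_calls"}, root))    // false
	fmt.Println(isFinalResponse(TokenUsage{SessionID: "sess-sub", FinishReason: "stop"}, root))    // false
	fmt.Println(isFinalResponse(TokenUsage{SessionID: root, FinishReason: "stop"}, root))          // true
}
```

Because token_usage fires immediately after each assistant turn, a check like this lets the client decide how to group a message before rendering it, rather than waiting for stream_stopped.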