
[BUG] OpenAIResponsesModel uses input_text for assistant messages, causing multi-turn conversations to fail (Can't use ChatGPT 5.4!) #1850

@sagimedina1

Description


Checks

  • I have updated to the latest minor and patch version of Strands
  • I have checked the documentation and this is not expected behavior
  • I have searched the existing issues and there are no duplicates of my issue

Strands Version

v1.29.0

Python Version

3.12

Operating System

macOS 26.3

Installation Method

other

Steps to Reproduce

from strands import Agent
from strands.models.openai_responses import OpenAIResponsesModel

model = OpenAIResponsesModel(
    model_id="gpt-5.4",
    params={"max_output_tokens": 200, "reasoning": {"effort": "low"}},
)

agent = Agent(model=model, system_prompt="Be brief.")
agent("Say hello")    # Turn 1 - succeeds
agent("Say goodbye")  # Turn 2 - fails

Expected Behavior

Multi-turn conversations should work with OpenAIResponsesModel, just as they do with OpenAIModel.

Actual Behavior

openai.BadRequestError: Error code: 400 - {
    'error': {
        'message': "Invalid value: 'input_text'. Supported values are: 'output_text' and 'refusal'.",
        'type': 'invalid_request_error',
        'param': 'input[1].content[0]',
        'code': 'invalid_value'
    }
}

Additional Context

_format_request_message_content() in openai_responses.py always returns {"type": "input_text", ...} for text
content, regardless of the message role. The OpenAI Responses API requires:

  • input_text for user messages
  • output_text for assistant messages

Since the first turn's assistant response is added to the conversation history with input_text, the second API
call is rejected.
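A minimal reconstruction (assumed request shapes, based on the Responses API message format) of the second-turn request body makes the failure concrete: every text block is typed input_text, including the assistant message at index 1, which matches the error's param 'input[1].content[0]'.

```python
# Hypothetical second-turn request input as currently serialized by Strands.
# The assistant message at input[1] is wrongly typed "input_text".
request_input = [
    {"role": "user", "content": [{"type": "input_text", "text": "Say hello"}]},
    {"role": "assistant", "content": [{"type": "input_text", "text": "Hello!"}]},  # invalid
    {"role": "user", "content": [{"type": "input_text", "text": "Say goodbye"}]},
]

# The API rejects this block: assistant content must be "output_text" or "refusal".
bad_block = request_input[1]["content"][0]
```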

This also affects any agent with tools, since tool calls create assistant messages in the history. For models
like gpt-5.4 that require the Responses API for reasoning_effort + function tools, this makes
OpenAIResponsesModel unusable beyond a single turn.

Version: strands-agents==1.29.0

Possible Solution

Pass the message role through to _format_request_message_content and set the content type accordingly:

In _format_request_messages, pass role to content formatter:

formatted_contents = [
    cls._format_request_message_content(content, role=role)
    for content in contents
    if not any(block_type in content for block_type in ["toolResult", "toolUse"])
]

In _format_request_message_content, use role to pick the type:

if "text" in content:
    text_type = "output_text" if role == "assistant" else "input_text"
    return {"type": text_type, "text": content["text"]}
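For illustration, here is a self-contained sketch of the role-aware formatting described above. The function names mirror the Strands methods but this is a standalone example, not the actual patch; the message/content shapes are assumed.

```python
# Standalone sketch (assumed message shapes) of the proposed role-aware fix.
def format_message_content(content, role="user"):
    """Pick input_text vs output_text based on the message role."""
    if "text" in content:
        text_type = "output_text" if role == "assistant" else "input_text"
        return {"type": text_type, "text": content["text"]}
    raise ValueError(f"unsupported content block: {list(content)}")

def format_messages(messages):
    """Format a conversation history, passing each message's role through."""
    return [
        {
            "role": msg["role"],
            "content": [
                format_message_content(block, role=msg["role"])
                for block in msg["content"]
                if not any(k in block for k in ("toolResult", "toolUse"))
            ],
        }
        for msg in messages
    ]

history = [
    {"role": "user", "content": [{"text": "Say hello"}]},
    {"role": "assistant", "content": [{"text": "Hello!"}]},
]
formatted = format_messages(history)
# formatted[0] uses "input_text"; formatted[1] uses "output_text"
```

With this change the first turn's assistant response round-trips as output_text, so the second API call is accepted.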

Related Issues

No response

Metadata

Labels: bug (Something isn't working)
