Description
Checks
- I have updated to the latest minor and patch version of Strands
- I have checked the documentation and this is not expected behavior
- I have searched the existing issues and there are no duplicates of my issue
Strands Version
1.29.0
Python Version
3.11
Operating System
Debian 11
Installation Method
pip
Steps to Reproduce
- Create a BedrockModel (us.anthropic.claude-sonnet-4-6) with cache_config=CacheConfig(strategy="auto")
- Send messages that contain assistant messages ending with reasoningContent blocks
- Invoke the model's stream method
- Observe Bedrock API validation error: "Cache point cannot be inserted after reasoning block"
Expected Behavior
Cache points should be inserted before reasoning blocks to comply with Bedrock's API constraints, allowing successful model invocation with automatic caching enabled.
Actual Behavior
The _inject_cache_point method appends cache points to the end of assistant message content without checking whether the last content block is a reasoning block, so any assistant message that ends in reasoningContent fails Bedrock API validation when automatic caching is enabled.
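The faulty ordering can be reproduced in isolation with plain dicts (a simplified sketch; the real block shapes come from the Bedrock Converse API, and the reasoningContent fields are abbreviated here for illustration):

```python
# Simplified assistant message content mirroring Bedrock Converse API
# block shapes (reasoningContent payload abbreviated for illustration).
content = [
    {"text": "Here is my answer."},
    {"reasoningContent": {"reasoningText": {"text": "step-by-step thoughts"}}},
]

# Current behavior: the cache point is blindly appended, landing *after*
# the reasoning block -- exactly the ordering Bedrock rejects.
content.append({"cachePoint": {"type": "default"}})

# Final order: text, reasoningContent, cachePoint -> API validation error.
```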
Additional Context
The auto-injection occurs in _format_bedrock_messages when cache_config.strategy == "auto". The ReasoningContentBlock type contains reasoningText and redactedContent fields that must come before any cache points.
Possible Solution
Modify _inject_cache_point to detect reasoning content blocks and insert cache points before them:
```python
# Check if the last block is reasoningContent
if content and "reasoningContent" in content[-1]:
    # Insert cache point before the reasoning block
    content.insert(len(content) - 1, {"cachePoint": {"type": "default"}})
else:
    # Append cache point at the end (original behavior)
    content.append({"cachePoint": {"type": "default"}})
```
This ensures cache points are positioned correctly while maintaining backward compatibility for messages without reasoning blocks.
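For reference, here is a self-contained version of the proposed check with both cases exercised (the standalone helper name is illustrative; the real change would live inside _inject_cache_point):

```python
def inject_cache_point(content: list[dict]) -> None:
    """Insert a cachePoint block, keeping it before any trailing reasoning block."""
    cache_point = {"cachePoint": {"type": "default"}}
    if content and "reasoningContent" in content[-1]:
        # Bedrock rejects cache points placed after reasoning blocks,
        # so slot the cache point in just before the final reasoning block.
        content.insert(len(content) - 1, cache_point)
    else:
        # No trailing reasoning block: the original append behavior is safe.
        content.append(cache_point)


# Message ending in a reasoning block: cache point goes second-to-last.
with_reasoning = [
    {"text": "answer"},
    {"reasoningContent": {"reasoningText": {"text": "thoughts"}}},
]
inject_cache_point(with_reasoning)

# Message without reasoning content: behavior is unchanged.
plain = [{"text": "answer"}]
inject_cache_point(plain)
```

With this change, with_reasoning ends in its reasoning block with the cache point immediately before it, while plain keeps the cache point at the end as before.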
Related Issues
No response