Description
When creating multiple `litellm_model` resources in parallel, some randomly fail with the error:

```
Error: Provider produced inconsistent result after apply

When applying changes to module.litellm.litellm_model.this["deepseek_v3"],
provider "provider[\"registry.terraform.io/berriai/litellm\"]" produced an
unexpected new value: Root object was present, but now absent.

This is a bug in the provider, which should be reported in the provider's own issue tracker.
```
The models are actually created successfully in LiteLLM: re-running `terraform apply` shows no changes needed or successfully imports them. This suggests the Create function returns successfully, but the subsequent Read (used to verify creation) returns nil/empty.
Environment

- Terraform version: 1.2+
- Provider version: 0.1.0
- LiteLLM version: latest (with `STORE_MODEL_IN_DB=True`)
Steps to Reproduce

1. Configure multiple `litellm_model` resources (10+)
2. Run `terraform apply`
3. Observe that most succeed but 1-3 randomly fail with the above error
4. Run `terraform apply` again: the failed resources now succeed or show "no changes"
Example Configuration

```hcl
resource "litellm_model" "models" {
  for_each = {
    claude_sonnet_4 = {
      model_name = "claude-sonnet-4-20250514"
      base_model = "bedrock/global.anthropic.claude-sonnet-4-20250514-v1:0"
    }
    deepseek_v3 = {
      model_name = "deepseek-v3-v1"
      base_model = "bedrock/deepseek.v3-v1:0"
    }
    # ... more models
  }

  model_name          = each.value.model_name
  custom_llm_provider = "bedrock"
  base_model          = each.value.base_model
}
```

Observed Behavior
In a batch of 24 models:
- 21 created successfully
- 3 failed with "Root object was present, but now absent"
- The 3 failed models were actually created in LiteLLM (verified via UI)
- Re-running apply succeeded for all 3
The failing models share no common pattern: different models fail on different runs.
Relevant Logs

From `TF_LOG=TRACE`:

```
[INFO]  Successfully read model after 1 attempts   <- for models that worked
[TRACE] maybeTainted: module.litellm.litellm_model.this["deepseek_v3"] encountered an error during creation
[TRACE] removing state object for module.litellm.litellm_model.this["deepseek_v3"]
```

Note: no "Successfully read model" line appears for the failed resources.
Expected Behavior
All models should be created successfully, or the provider should retry the read operation until the model is available.
Suspected Cause
Race condition between model creation and read-back verification. The Create API call succeeds, but the immediate Read to verify returns empty/nil - possibly due to:
- Eventual consistency in the LiteLLM database
- The read occurring before the model is fully committed
- Insufficient retry attempts or backoff in the provider's read-after-create logic
Workarounds

- Re-run `terraform apply`: works because the model already exists
- Use `terraform apply -parallelism=1` to serialize operations (slower but reliable)