Is your feature request related to a problem?
When you run an evaluation using a config with the GPT-4 Turbo model, the run fails with an unclear error message. It’s hard to tell what actually went wrong, which confuses users and interrupts the workflow.
To Reproduce
Steps to reproduce the behavior:
- Go to Config and create a new version using the GPT-4 Turbo model
- Go to Evaluation → Text
- Open the Evaluation tab
- Add a name, pick a config, and select a dataset
- Click “Run evaluation” — an error shows up in the right sidebar
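A minimal sketch of the kind of error handling that would make this failure easier to diagnose. Everything here is hypothetical: `run_evaluation`, the config shape, and the supported-model list are assumptions for illustration, not the actual API.

```python
# Hypothetical: a model allow-list the backend might validate against.
SUPPORTED_MODELS = {"gpt-3.5-turbo", "gpt-4"}

def run_evaluation(config: dict) -> str:
    """Assumed evaluation entry point; raises a descriptive error
    instead of surfacing an opaque message in the sidebar."""
    model = config.get("model")
    if model not in SUPPORTED_MODELS:
        raise ValueError(
            f"Evaluation failed: model '{model}' is not supported by this "
            f"config. Supported models: {sorted(SUPPORTED_MODELS)}"
        )
    return "ok"

# Reproducing the reported case with a clearer message:
try:
    run_evaluation({"model": "gpt-4-turbo"})
except ValueError as err:
    print(err)
```

The point is not the specific allow-list but that the error names the offending model and the valid alternatives, so the user can correct the config without guessing.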
Expected behavior
Running an evaluation with the config should complete successfully, or fail with an error message that clearly states the cause.
Screenshots
