feat(iswf): Adds silenced_exceptions parameter to tasks, exposes this and report_timeout_errors in task registration #608
GabeVillalobos wants to merge 7 commits into main
Conversation
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Reviewed by Cursor Bugbot for commit 7aa4820.
Can this state just cease to exist? There is a single usage of it in sentry.
```python
    at_most_once=at_most_once,
    wait_for_delivery=wait_for_delivery,
    compression_type=compression_type,
    report_timeout_errors=report_timeout_errors,
    expected_exceptions=expected_exceptions,
)
self._registered_tasks[name] = task
return task
```
Bug: The new expected_exceptions and report_timeout_errors parameters on ExternalNamespace.register() have no effect because ExternalTask instances are not executed locally.
Severity: MEDIUM
Suggested Fix
Raise a TypeError or ValueError if expected_exceptions or report_timeout_errors are passed to ExternalNamespace.register(), as these parameters are not supported for external tasks. Alternatively, document this limitation clearly in the method's docstring to prevent misuse.
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.
Location: clients/python/src/taskbroker_client/registry.py#L283-L290
Potential issue: The `ExternalNamespace.register()` method now accepts `expected_exceptions` and `report_timeout_errors` parameters. However, these are silently ignored for external tasks. These parameters control task execution behavior, but `ExternalTask` instances are only dispatched, not executed, by the local worker; the remote application's task registration governs error handling. A developer setting `expected_exceptions` on an external task stub will incorrectly believe they have suppressed certain error reports, when in fact the setting has no effect at runtime.
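One way to implement the suggested fix is to reject the execution-only options up front. The sketch below is illustrative, not the actual `taskbroker_client` implementation; `ExternalNamespace` here is a stand-in class with a simplified `register()` signature.

```python
class ExternalNamespace:
    """Stand-in for the real ExternalNamespace (hypothetical sketch)."""

    def register(self, name, *, expected_exceptions=None, report_timeout_errors=True):
        # External tasks are only dispatched, never executed by the local
        # worker, so execution-time options would be silently ignored.
        # Fail loudly instead of accepting them.
        if expected_exceptions is not None or report_timeout_errors is not True:
            raise TypeError(
                "expected_exceptions and report_timeout_errors are not "
                "supported for external tasks"
            )
        return name  # placeholder for the real ExternalTask stub
```

Documenting the limitation in the docstring is the gentler alternative, but a hard error makes the misuse impossible rather than merely discouraged.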
@markstory Yeah, we can. I think this change still makes sense, excluding that use case, since we won't always want to be notified when we retry a retriable exception.
evanh left a comment:
Could you add some tests for this behaviour?
@evanh Added some testing. I think this should cover the major use cases we care about, but I can add more if needed.
What's a real-world scenario for letting a task raise an exception, doing nothing about it, and considering it successful?
```python
    wait_for_delivery: bool = False,
    compression_type: CompressionType = CompressionType.PLAINTEXT,
    report_timeout_errors: bool = True,
    expected_exceptions: tuple[type[BaseException], ...] | None = None,
```
Isn't this naming a bit of a contradiction: "expected exception"? What about silenced_exceptions?
The original name for this was exceptions_to_silence 😅 I do think this is a bit more appropriate; expected is a bit vague in what it's implying. I'll update the name again.
@fpacifici I largely agree. This seems to be a task-definition shorthand for avoiding a top-level try/except, though this PR doesn't introduce that behavior. If we want to deprecate this in favor of slightly more verbose task handlers, I'm in favor of doing that in a separate PR.
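To make the shorthand being discussed concrete, here is a minimal sketch (not the actual taskbroker_client worker) contrasting the two styles: an explicit try/except inside the task body versus a silenced_exceptions tuple declared at registration time and consulted by the worker loop.

```python
class LookupMiss(Exception):
    """Illustrative application error we want to treat as success."""

# Style 1: verbose, explicit handler inside the task body.
def task_with_handler():
    try:
        raise LookupMiss("row not found")
    except LookupMiss:
        pass  # swallow: task is considered successful

# Style 2: the worker loop consults a declared tuple instead.
def run(task, silenced_exceptions=()):
    try:
        task()
        return "success"
    except Exception as err:
        if isinstance(err, silenced_exceptions):
            return "success"  # silenced: no failure status, no error report
        return "failure"

def task_without_handler():
    raise LookupMiss("row not found")
```

Both styles mark the task successful; the declarative form just moves the decision out of the task body and into registration metadata.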
```python
except Exception as err:
    retry = task_func.retry
    captured_error = False
    should_capture_error = not isinstance(err, task_func.silenced_exceptions)
    if retry:
        if retry.should_retry(inflight.activation.retry_state, err):
            logger.info(
```
Bug: The silenced_exceptions parameter does not work for ProcessingDeadlineExceeded because it is caught by an earlier, more specific except block that ignores this parameter.
Severity: MEDIUM
Suggested Fix
Modify the except ProcessingDeadlineExceeded block to also check if the exception type is present in the silenced_exceptions tuple. If it is, the exception should be suppressed according to the intended logic. This will align the behavior with the API's documented purpose.
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.
Location: clients/python/src/taskbroker_client/worker/workerchild.py#L266-L272
Potential issue: The `silenced_exceptions` parameter has no effect when `ProcessingDeadlineExceeded` is included in it. This is because `ProcessingDeadlineExceeded` inherits from `BaseException` and is caught by a dedicated `except ProcessingDeadlineExceeded` block. This earlier block does not check the `silenced_exceptions` list. As a result, the general `except Exception` block, which contains the logic for `silenced_exceptions`, is never reached for this specific exception. This leads to a misleading API contract where the attempt to silence `ProcessingDeadlineExceeded` fails without any indication.
This is intentional. report_timeout_errors controls whether or not we report ProcessingDeadlineExceeded errors to Sentry.

Exposes the previously added report_timeout_errors via the task registration method. Adds a new silenced_exceptions parameter, which should put us at feature parity with the remainder of the fields in @retry. To emulate each of these previous parameters:

- on
- on_silent
- exclude
- ignore
- ignore_and_capture

.... It's actually not entirely clear to me how this is supposed to behave compared to exclude.