
Commit ae3c810

docs: address review comments - rephrase heading, slim heartbeats cross-reference
Co-Authored-By: ian.alton@airbyte.io <ian.alton@airbyte.io>
1 parent 28c3ac7 commit ae3c810


2 files changed: +2 additions, -8 deletions


docs/platform/understanding-airbyte/heartbeats.md (1 addition, 7 deletions)

@@ -70,10 +70,4 @@ The timeout can be configured using the file `flags.yaml` through 2 entries:
 
 ## Related: Workload Monitor
 
-The heartbeat mechanisms described on this page monitor **connector-level responsiveness** within a running sync — i.e. whether the source is emitting records and whether the destination is responding to write calls.
-
-Airbyte also has a separate **platform-level Workload Monitor** that checks whether the workload pod itself is alive and progressing through its lifecycle (pending → claimed → launched → running). If the pod crashes, is OOM-killed, or never starts, the Workload Monitor fails the workload with the message:
-
-> _"Airbyte could not track the sync progress. Sync process exited without reporting status."_
-
-This error is surfaced as a `WorkloadMonitorException` and is distinct from the source/destination heartbeat errors described above. For details on how the Workload Monitor works and how to debug these errors, see [Workload Monitor](./jobs.md#workload-monitor).
+The heartbeat mechanisms described on this page monitor connector-level responsiveness within a running sync. Airbyte also has a separate platform-level monitor that checks whether the workload pod itself is alive and progressing. For details, see [Workload Monitor](./jobs.md#workload-monitor).
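The "connector-level responsiveness" check that this diff summarizes can be illustrated with a minimal sketch. This is a conceptual illustration only, not Airbyte's implementation; the `HeartbeatMonitor` class, its method names, and the timeout value are all hypothetical, standing in for the timeouts configured via `flags.yaml`.

```python
import time


class HeartbeatMonitor:
    """Conceptual sketch (hypothetical, not Airbyte's code): record when the
    source last emitted a record and flag the sync once a timeout elapses."""

    def __init__(self, timeout_seconds: float):
        self.timeout_seconds = timeout_seconds
        self.last_beat = time.monotonic()

    def beat(self) -> None:
        # Called whenever the source emits a record (or the destination
        # acknowledges a write), resetting the staleness window.
        self.last_beat = time.monotonic()

    def is_stale(self) -> bool:
        # True once nothing has been heard within the timeout window.
        return time.monotonic() - self.last_beat > self.timeout_seconds


monitor = HeartbeatMonitor(timeout_seconds=0.05)
monitor.beat()
print(monitor.is_stale())  # False right after a beat
time.sleep(0.1)
print(monitor.is_stale())  # True once the timeout window has elapsed
```

The point of the docs change is that this per-record check is separate from the platform-level pod monitoring described in jobs.md.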

docs/platform/understanding-airbyte/jobs.md (1 addition, 1 deletion)

@@ -79,7 +79,7 @@ When a workload is picked up by the Launcher, it passes through the following pi
 
 After the LAUNCH stage completes, the pipeline's success handler transitions the workload status to **LAUNCHED** via the Workload API.
 
-#### Why is there a delay between LAUNCH and LAUNCHED?
+#### LAUNCH to LAUNCHED delay
 
 The time between the `APPLY Stage: LAUNCH` log line and the `Attempting to update workload ... to LAUNCHED` log line is the time Kubernetes takes to accept and begin scheduling the pod. In most cases this is seconds, but it can be significantly longer when:
 
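The lifecycle this commit touches (pending → claimed → launched → running, with LAUNCHED set only after the LAUNCH stage succeeds) can be sketched as a small state machine. This is a conceptual sketch only; the status names and transition table here are assumptions modeled on the docs, not the platform's actual code.

```python
from enum import Enum


class WorkloadStatus(Enum):
    # Hypothetical stage names mirroring the lifecycle described in the docs.
    PENDING = "pending"
    CLAIMED = "claimed"
    LAUNCHED = "launched"
    RUNNING = "running"
    FAILED = "failed"


# Allowed forward transitions; any stage may fail, nothing moves backward.
ALLOWED = {
    WorkloadStatus.PENDING: {WorkloadStatus.CLAIMED, WorkloadStatus.FAILED},
    WorkloadStatus.CLAIMED: {WorkloadStatus.LAUNCHED, WorkloadStatus.FAILED},
    WorkloadStatus.LAUNCHED: {WorkloadStatus.RUNNING, WorkloadStatus.FAILED},
    WorkloadStatus.RUNNING: {WorkloadStatus.FAILED},
}


def transition(current: WorkloadStatus, target: WorkloadStatus) -> WorkloadStatus:
    """Return the target status if the move is legal, else raise."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target
```

Under this model, the delay the renamed heading discusses is simply the wall-clock time spent between requesting the CLAIMED → LAUNCHED transition and Kubernetes actually scheduling the pod.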
