Conversation
Important: Review skipped. Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: Organization UI · Review profile: CHILL · Plan: Pro
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e99dcf204e
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/server/modules/agents/agents-manager.ts (1)
Lines 153-162: ⚠️ Potential issue | 🟠 Major — Clear the stopped manager before starting a replacement.
After `stop()` returns, `runtime.agentManager` still points at the old instance until the new manager finishes starting. If `nextAgentManager.start()` throws, later `runBackup()` calls will try to use a dead manager instead of returning `unavailable`.
Suggested fix

```diff
 export const startAgentRuntime = async () => {
   const runtime = getAgentRuntimeState();
   if (runtime.agentManager) {
     await runtime.agentManager.stop();
+    runtime.agentManager = null;
   }
   const { createAgentManagerRuntime } = await import("./controller/server");
   const nextAgentManager = createAgentManagerRuntime();
   nextAgentManager.setBackupEventHandlers(backupEventHandlers);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/server/modules/agents/agents-manager.ts` around lines 153 - 162, After awaiting runtime.agentManager.stop(), clear the reference (set runtime.agentManager to undefined/null) before creating/starting the replacement so callers like runBackup() don't see a stopped manager; then create the new manager via createAgentManagerRuntime(), set its backup handlers, await nextAgentManager.start(), and only after successful start assign runtime.agentManager = nextAgentManager to avoid leaving a dead instance if start() throws.
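The assign-after-start ordering the prompt describes can be sketched standalone; the `Runtime` and `Manager` shapes below are illustrative assumptions, not the project's actual types:

```typescript
// Hypothetical shapes standing in for the real runtime types.
type Manager = { start(): Promise<void>; stop(): Promise<void> };
type Runtime = { agentManager: Manager | null };

// Clear the reference before starting the replacement, and only assign
// after start() succeeds, so a throwing start() leaves agentManager null
// (callers then see "unavailable" rather than a dead manager).
const restartManager = async (runtime: Runtime, create: () => Manager): Promise<void> => {
  if (runtime.agentManager) {
    await runtime.agentManager.stop();
    runtime.agentManager = null;
  }
  const next = create();
  await next.start();
  runtime.agentManager = next;
};
```

With this ordering, a rejected `start()` propagates to the caller while `agentManager` remains `null`, so a later `runBackup()` sees an unavailable runtime instead of a stopped manager.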
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/server/modules/agents/__tests__/agents-manager.test.ts`:
- Around line 11-12: The tests never exercise the spawn path because the module
import happens against an uninitialized singleton; before dynamically importing
spawnLocalAgent/stopLocalAgent, reset module cache and initialize the same
runtime singleton used by agents-manager.backups.test.ts so the lifecycle path
is seeded correctly. Concretely: call jest.resetModules(), run the same
setup/seeding helper (or replicate its init logic) that
agents-manager.backups.test.ts uses to seed the runtime singleton, then
dynamically import { spawnLocalAgent, stopLocalAgent } so spawn() is invoked
against the proper runtime instance.
In `@app/server/modules/agents/agents-manager.ts`:
- Around line 70-99: The function requestBackupCancellation sets
activeBackupRun.cancellationRequested before awaiting runtime.cancelBackup, so
if runtime.cancelBackup rejects the active run remains registered and its
completion promise can hang; fix it by wrapping the await
runtime.cancelBackup(...) call in a try/catch inside requestBackupCancellation
(use the existing activeBackupRun and runtime variables), and in the catch call
resolveActiveBackupRun(scheduleId, { status: "cancelled" }) and return true so
the active run is cleaned up; keep the cancellationRequested flag semantics but
ensure cleanup happens on rejection of runtime.cancelBackup.
In `@app/server/modules/agents/controller/session.ts`:
- Around line 149-152: The current check treats socket.send() <= 0 as a fatal
send failure and closes the session; change it so only a return value of 0
triggers handleSendFailure("connection issue") (i.e., close), while -1 is
handled as backpressure (call handleSendFailure("backpressure") or a non-fatal
path) so agents are not disconnected when the frame is enqueued; update the
conditional around socket.send(message) in the session send logic (reference
socket.send and handleSendFailure) to distinguish 0 vs -1 and ensure -1 does not
close the session.
In `@app/server/modules/agents/helpers/runtime-state.dev.ts`:
- Around line 14-18: hydrateAgentRuntimeState currently uses nullish coalescing
(??) so non-null but invalid values (e.g. {} from hot reload) are preserved;
change it to validate types instead: for both activeBackupsByScheduleId and
activeBackupScheduleIdsByJobId, replace the nullish checks with type checks
(e.g. runtime.activeBackupsByScheduleId instanceof Map ?
runtime.activeBackupsByScheduleId : new Map()) so only actual Map instances are
kept, aligning behavior with hasActiveBackupMaps and preventing runtime
.get()/.set() failures when legacy fields are present but not Maps.
In `@app/test/helpers/agent-mock.ts`:
- Around line 36-77: The IIFE swallowing errors with catch(() => {}) causes
runBackupMock callers to hang when resticBackupMock rejects; update the catch to
forward the failure: inside the catch handler for the IIFE, call the outer
resolve with a failed result (e.g. resolve({ status: "failed", error:
String(err) })) and clean up runningBackups (delete(request.scheduleId)) so the
outer promise is settled on error; reference the resticBackupMock call and the
existing resolve variable to implement this change.
---
Outside diff comments:
In `@app/server/modules/agents/agents-manager.ts`:
- Around line 153-162: After awaiting runtime.agentManager.stop(), clear the
reference (set runtime.agentManager to undefined/null) before creating/starting
the replacement so callers like runBackup() don't see a stopped manager; then
create the new manager via createAgentManagerRuntime(), set its backup handlers,
await nextAgentManager.start(), and only after successful start assign
runtime.agentManager = nextAgentManager to avoid leaving a dead instance if
start() throws.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 54f4a332-e766-44ed-be04-9e7266edb9d3
📒 Files selected for processing (19)
- app/server/modules/agents/__tests__/agents-manager.backups.test.ts
- app/server/modules/agents/__tests__/agents-manager.test.ts
- app/server/modules/agents/__tests__/session.test.ts
- app/server/modules/agents/agents-manager.ts
- app/server/modules/agents/controller/session.ts
- app/server/modules/agents/helpers/runtime-state.dev.ts
- app/server/modules/agents/helpers/runtime-state.ts
- app/server/modules/agents/local/process.ts
- app/server/modules/backups/__tests__/backups.service.execution.test.ts
- app/server/modules/backups/__tests__/backups.service.test.ts
- app/server/modules/backups/backup-executor.ts
- app/server/modules/backups/backups.execution.ts
- app/server/modules/backups/backups.service.ts
- app/server/modules/backups/helpers/backup-lifecycle.ts
- app/test/helpers/agent-mock.ts
- apps/agent/src/__tests__/controller-session.test.ts
- apps/agent/src/controller-session.ts
- apps/agent/src/index.ts
- apps/agent/tsconfig.json
💤 Files with no reviewable changes (1)
- app/server/modules/backups/backups.execution.ts
```ts
const { spawnLocalAgent, stopLocalAgent } = await import("../agents-manager");
```
These tests are red because the spawn path never runs.
CI already shows both assertions getting zero spawn() calls, so this harness is not exercising the lifecycle path it is asserting on. Please fix the setup before merge—likely by initializing/resetting the same runtime singleton that app/server/modules/agents/__tests__/agents-manager.backups.test.ts seeds—otherwise this file keeps the suite failing.
Also applies to: 44-74
```ts
const requestBackupCancellation = async (agentId: string, scheduleId: number) => {
  const activeBackupRun = getActiveBackupsByScheduleId().get(scheduleId);
  if (!activeBackupRun) {
    return false;
  }

  if (activeBackupRun.cancellationRequested) {
    return true;
  }

  activeBackupRun.cancellationRequested = true;

  const runtime = getAgentManagerRuntime();
  if (!runtime) {
    resolveActiveBackupRun(scheduleId, { status: "cancelled" });
    return true;
  }

  if (
    await runtime.cancelBackup(agentId, {
      jobId: activeBackupRun.jobId,
      scheduleId: activeBackupRun.scheduleShortId,
    })
  ) {
    return true;
  }

  resolveActiveBackupRun(scheduleId, { status: "cancelled" });
  return true;
};
```
Resolve locally when cancelBackup() rejects.
At Line 80, cancellationRequested is set before awaiting runtime.cancelBackup(). If that promise rejects, the active run stays registered and later calls hit the early return at Line 76, so the original completion promise can remain pending forever.
Suggested fix
```diff
  activeBackupRun.cancellationRequested = true;
  const runtime = getAgentManagerRuntime();
  if (!runtime) {
    resolveActiveBackupRun(scheduleId, { status: "cancelled" });
    return true;
  }
- if (
-   await runtime.cancelBackup(agentId, {
-     jobId: activeBackupRun.jobId,
-     scheduleId: activeBackupRun.scheduleShortId,
-   })
- ) {
-   return true;
- }
+ try {
+   if (
+     await runtime.cancelBackup(agentId, {
+       jobId: activeBackupRun.jobId,
+       scheduleId: activeBackupRun.scheduleShortId,
+     })
+   ) {
+     return true;
+   }
+ } catch (error) {
+   logger.warn(
+     `Failed to cancel backup ${activeBackupRun.jobId} for schedule ${scheduleId}: ${
+       error instanceof Error ? error.message : String(error)
+     }`,
+   );
+ }
  resolveActiveBackupRun(scheduleId, { status: "cancelled" });
  return true;
};
```
```ts
const sendResult = socket.send(message);
if (sendResult <= 0) {
  handleSendFailure(sendResult === 0 ? "connection issue" : "backpressure");
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's examine the file and the context around lines 149-152
cd app/server/modules/agents/controller && wc -l session.ts && sed -n '140,160p' session.ts
```

Repository: nicotsx/zerobyte
Length of output: 588

🏁 Script executed:

```shell
# Search for ServerWebSocket usage and send() calls in the file to understand context
rg "socket\.send|handleSendFailure|closeSession" app/server/modules/agents/controller/session.ts -B 2 -A 2
```

Repository: nicotsx/zerobyte
Length of output: 921
🌐 Web query:
Bun ServerWebSocket.send() return value semantics documentation
💡 Result:
The Bun ServerWebSocket.send method returns a number indicating the result of the send operation, specifically for backpressure handling:
- A value of 1 or greater represents the number of bytes successfully sent.
- -1 means the message was enqueued but there is backpressure (the send buffer is full, and the message is queued for later transmission).
- 0 means the message was dropped due to a connection issue (e.g., the connection is closed or unhealthy).

This return value allows developers to handle backpressure explicitly, such as by pausing message sending when -1 is returned and resuming on the 'drain' event. Unlike the browser/client WebSocket.send, which returns void/undefined, Bun's server-side implementation provides this numeric status for performance and reliability in server scenarios. Signature: `send(data: string | BufferSource, compress?: boolean): number`
Citations:
- 1: https://bun.sh/reference/bun/ServerWebSocket
- 2: https://bun.sh/docs/api/websockets
- 3: https://bun.com/reference/bun/WebSocketHandler/message
- 4: https://www.mintlify.com/oven-sh/bun/runtime/websockets
- 5: https://bun.com/reference/bun/ServerWebSocket
- 6: https://bun.com/docs/runtime/http/websockets
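Under those documented semantics, the three return-code cases map onto distinct outcomes. A minimal sketch (the helper name is ours, for illustration):

```typescript
// Map Bun's ServerWebSocket.send() return value onto an outcome.
// Documented semantics: > 0 bytes sent, -1 enqueued under backpressure,
// 0 dropped because of a connection issue.
type SendOutcome = "sent" | "backpressure" | "connection issue";

const classifySendResult = (sendResult: number): SendOutcome => {
  if (sendResult > 0) return "sent";
  if (sendResult === -1) return "backpressure";
  return "connection issue"; // 0 — the only case that warrants closing the session
};

console.log(classifySendResult(42)); // "sent"
console.log(classifySendResult(-1)); // "backpressure"
console.log(classifySendResult(0)); // "connection issue"
```

Only the `"connection issue"` outcome means the frame was lost; `"backpressure"` means it was accepted and will flush on `drain`.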
Don't close the session on Bun backpressure.
ServerWebSocket.send() returns -1 when the frame was enqueued under backpressure (the send buffer is full but the message is queued for later transmission), and 0 only when it was dropped because of a connection issue. Closing the session for all <= 0 results means a healthy agent experiencing buffer pressure will be disconnected and its backup cancelled even though the message was accepted. Restrict the failure path to 0, and handle -1 as backpressure instead.
Suggested fix
```diff
  const sendResult = socket.send(message);
- if (sendResult <= 0) {
-   handleSendFailure(sendResult === 0 ? "connection issue" : "backpressure");
- }
+ if (sendResult === 0) {
+   handleSendFailure("connection issue");
+ } else if (sendResult < 0) {
+   logger.warn(
+     `Backpressure while sending to agent ${socket.data.agentId} on ${socket.data.id}; keeping session open`,
+   );
+ }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const sendResult = socket.send(message);
if (sendResult === 0) {
  handleSendFailure("connection issue");
} else if (sendResult < 0) {
  logger.warn(
    `Backpressure while sending to agent ${socket.data.agentId} on ${socket.data.id}; keeping session open`,
  );
}
```
```ts
const hydrateAgentRuntimeState = (runtime: LegacyAgentRuntimeState): AgentRuntimeState => ({
  ...runtime,
  activeBackupsByScheduleId: runtime.activeBackupsByScheduleId ?? new Map(),
  activeBackupScheduleIdsByJobId: runtime.activeBackupScheduleIdsByJobId ?? new Map(),
});
```
Hydrate legacy map fields by type, not by nullishness.
hasActiveBackupMaps() only reaches this branch when at least one field is missing or not a Map, but ?? preserves non-nullish invalid values. A hot-reloaded runtime like { activeBackupsByScheduleId: {} } will still be returned as AgentRuntimeState, and the next .get() / .set() will fail at runtime.
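The hazard is reproducible in isolation: `??` only substitutes for `null`/`undefined`, so a stale plain object survives hydration, while an `instanceof` check replaces it. A standalone sketch (not the module's actual types):

```typescript
type LegacyState = { activeBackupsByScheduleId?: unknown };

// Nullish coalescing keeps any non-null value, including a stale {}.
const hydrateWithNullish = (s: LegacyState): Map<number, string> =>
  (s.activeBackupsByScheduleId ?? new Map()) as Map<number, string>;

// Type-based hydration keeps only real Map instances.
const hydrateByType = (s: LegacyState): Map<number, string> =>
  s.activeBackupsByScheduleId instanceof Map
    ? (s.activeBackupsByScheduleId as Map<number, string>)
    : new Map();

// A hot-reload leftover: the field exists but is a plain object, not a Map.
const stale: LegacyState = { activeBackupsByScheduleId: {} };

const bad = hydrateWithNullish(stale);
console.log(typeof (bad as { get?: unknown }).get); // "undefined" — bad.get() would throw

const good = hydrateByType(stale);
good.set(1, "job-1");
console.log(good.get(1)); // "job-1"
```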
Suggested fix
```diff
 const hydrateAgentRuntimeState = (runtime: LegacyAgentRuntimeState): AgentRuntimeState => ({
   ...runtime,
-  activeBackupsByScheduleId: runtime.activeBackupsByScheduleId ?? new Map(),
-  activeBackupScheduleIdsByJobId: runtime.activeBackupScheduleIdsByJobId ?? new Map(),
+  activeBackupsByScheduleId:
+    runtime.activeBackupsByScheduleId instanceof Map ? runtime.activeBackupsByScheduleId : new Map(),
+  activeBackupScheduleIdsByJobId:
+    runtime.activeBackupScheduleIdsByJobId instanceof Map
+      ? runtime.activeBackupScheduleIdsByJobId
+      : new Map(),
 });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const hydrateAgentRuntimeState = (runtime: LegacyAgentRuntimeState): AgentRuntimeState => ({
  ...runtime,
  activeBackupsByScheduleId:
    runtime.activeBackupsByScheduleId instanceof Map ? runtime.activeBackupsByScheduleId : new Map(),
  activeBackupScheduleIdsByJobId:
    runtime.activeBackupScheduleIdsByJobId instanceof Map
      ? runtime.activeBackupScheduleIdsByJobId
      : new Map(),
});
```
```ts
void (async () => {
  const stderrLines: string[] = [];
  const result = await resticBackupMock(
    fromPartial<SafeSpawnParams>({
      signal: request.signal,
      onStderr: (line: string) => {
        stderrLines.push(line);
      },
    }),
  );
  const running = runningBackups.get(request.scheduleId);
  if (!running || running.cancelled) {
    return;
  }

  runningBackups.delete(request.scheduleId);

  if (result.exitCode === 0 || result.exitCode === 3) {
    let parsedResult: Record<string, unknown> | null = null;
    if (result.summary) {
      try {
        parsedResult = JSON.parse(result.summary) as Record<string, unknown>;
      } catch {
        parsedResult = null;
      }
    }

    resolve({
      status: "completed",
      exitCode: result.exitCode,
      result: fromAny(parsedResult),
      warningDetails: stderrLines.join("\n") || null,
    });
    return;
  }

  const resultWithStderr = result as typeof result & { stderr?: string };
  resolve({
    status: "failed",
    error: stderrLines.join("\n") || resultWithStderr.stderr || result.error,
  });
})().catch(() => {});
```
Rejected mock backups currently hang the test instead of failing.
If resticBackupMock() rejects, the catch(() => {}) at Line 77 drops the error and leaves the outer promise unresolved. Anything awaiting runBackupMock() will sit until the test times out.
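The hang and its fix can be shown with a stripped-down stand-in for the mock (the shapes here are assumptions, not the helper's real types):

```typescript
type BackupResult = { status: "completed" | "failed"; error?: string };

// The outer promise's resolve is only called inside the inner async IIFE.
// If the IIFE's catch swallowed the rejection (catch(() => {})), the outer
// promise would never settle and awaiters would hang until a test timeout.
// Forwarding the failure through resolve settles it instead.
const runBackupSketch = (work: () => Promise<void>): Promise<BackupResult> =>
  new Promise<BackupResult>((resolve) => {
    void (async () => {
      await work(); // may reject
      resolve({ status: "completed" });
    })().catch((error) => {
      resolve({
        status: "failed",
        error: error instanceof Error ? error.message : String(error),
      });
    });
  });

runBackupSketch(() => Promise.reject(new Error("restic exited 1"))).then((r) =>
  console.log(r.status, r.error), // "failed restic exited 1"
);
```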
Suggested fix
```diff
- void (async () => {
+ void (async () => {
    const stderrLines: string[] = [];
    const result = await resticBackupMock(
      fromPartial<SafeSpawnParams>({
        signal: request.signal,
        onStderr: (line: string) => {
          stderrLines.push(line);
@@
-  })().catch(() => {});
+  })().catch((error) => {
+    runningBackups.delete(request.scheduleId);
+    resolve({
+      status: "failed",
+      error: error instanceof Error ? error.message : String(error),
+    });
+  });
   });
 },
);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
void (async () => {
  const stderrLines: string[] = [];
  const result = await resticBackupMock(
    fromPartial<SafeSpawnParams>({
      signal: request.signal,
      onStderr: (line: string) => {
        stderrLines.push(line);
      },
    }),
  );
  const running = runningBackups.get(request.scheduleId);
  if (!running || running.cancelled) {
    return;
  }
  runningBackups.delete(request.scheduleId);
  if (result.exitCode === 0 || result.exitCode === 3) {
    let parsedResult: Record<string, unknown> | null = null;
    if (result.summary) {
      try {
        parsedResult = JSON.parse(result.summary) as Record<string, unknown>;
      } catch {
        parsedResult = null;
      }
    }
    resolve({
      status: "completed",
      exitCode: result.exitCode,
      result: fromAny(parsedResult),
      warningDetails: stderrLines.join("\n") || null,
    });
    return;
  }
  const resultWithStderr = result as typeof result & { stderr?: string };
  resolve({
    status: "failed",
    error: stderrLines.join("\n") || resultWithStderr.stderr || result.error,
  });
})().catch((error) => {
  runningBackups.delete(request.scheduleId);
  resolve({
    status: "failed",
    error: error instanceof Error ? error.message : String(error),
  });
});
```

Summary by CodeRabbit
- New Features
- Bug Fixes
- Tests