
feat(agent): add standalone agent runtime#776

Open
nicotsx wants to merge 10 commits into main from 04-07-feat-backup-execution-via-agent
Conversation

nicotsx (Owner) commented Apr 10, 2026

Summary by CodeRabbit

  • New Features

    • Added backup cancellation support, allowing users to stop in-progress backups.
  • Bug Fixes

    • Improved local agent reliability with proper restart handling and error recovery.
    • Enhanced socket connection handling to prevent session hangs on send failures.
    • Fixed backup event tracking for more accurate status reporting.
  • Tests

    • Added comprehensive test coverage for backup cancellation and agent lifecycle management.

coderabbitai bot (Contributor) commented Apr 10, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 1bb0c38e-3d64-424b-ba9a-345e9367eecc


nicotsx (Owner, Author) commented Apr 10, 2026

This stack of pull requests is managed by Graphite. Learn more about stacking.

nicotsx force-pushed the 04-07-feat-backup-execution-via-agent branch from de3086b to c6adfbc on April 12, 2026 at 12:49
nicotsx force-pushed the 04-07-feat-backup-execution-via-agent branch from c6adfbc to 9a242eb on April 12, 2026 at 13:41
nicotsx marked this pull request as ready for review on April 12, 2026 at 14:32
nicotsx force-pushed the 04-07-feat-backup-execution-via-agent branch from e99dcf2 to 38b4ac2 on April 12, 2026 at 14:39
chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: e99dcf204e

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/server/modules/agents/agents-manager.ts (1)

153-162: ⚠️ Potential issue | 🟠 Major

Clear the stopped manager before starting a replacement.

After stop() returns, runtime.agentManager still points at the old instance until the new manager finishes starting. If nextAgentManager.start() throws, later runBackup() calls will try to use a dead manager instead of returning unavailable.

Suggested fix
 export const startAgentRuntime = async () => {
 	const runtime = getAgentRuntimeState();
 
 	if (runtime.agentManager) {
 		await runtime.agentManager.stop();
+		runtime.agentManager = null;
 	}
 
 	const { createAgentManagerRuntime } = await import("./controller/server");
 	const nextAgentManager = createAgentManagerRuntime();
 	nextAgentManager.setBackupEventHandlers(backupEventHandlers);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/server/modules/agents/agents-manager.ts` around lines 153 - 162, After
awaiting runtime.agentManager.stop(), clear the reference (set
runtime.agentManager to undefined/null) before creating/starting the replacement
so callers like runBackup() don't see a stopped manager; then create the new
manager via createAgentManagerRuntime(), set its backup handlers, await
nextAgentManager.start(), and only after successful start assign
runtime.agentManager = nextAgentManager to avoid leaving a dead instance if
start() throws.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 54f4a332-e766-44ed-be04-9e7266edb9d3

📥 Commits

Reviewing files that changed from the base of the PR and between b25ccdc and 38b4ac2.

📒 Files selected for processing (19)
  • app/server/modules/agents/__tests__/agents-manager.backups.test.ts
  • app/server/modules/agents/__tests__/agents-manager.test.ts
  • app/server/modules/agents/__tests__/session.test.ts
  • app/server/modules/agents/agents-manager.ts
  • app/server/modules/agents/controller/session.ts
  • app/server/modules/agents/helpers/runtime-state.dev.ts
  • app/server/modules/agents/helpers/runtime-state.ts
  • app/server/modules/agents/local/process.ts
  • app/server/modules/backups/__tests__/backups.service.execution.test.ts
  • app/server/modules/backups/__tests__/backups.service.test.ts
  • app/server/modules/backups/backup-executor.ts
  • app/server/modules/backups/backups.execution.ts
  • app/server/modules/backups/backups.service.ts
  • app/server/modules/backups/helpers/backup-lifecycle.ts
  • app/test/helpers/agent-mock.ts
  • apps/agent/src/__tests__/controller-session.test.ts
  • apps/agent/src/controller-session.ts
  • apps/agent/src/index.ts
  • apps/agent/tsconfig.json
💤 Files with no reviewable changes (1)
  • app/server/modules/backups/backups.execution.ts

Comment on lines +11 to +12
const { spawnLocalAgent, stopLocalAgent } = await import("../agents-manager");


⚠️ Potential issue | 🟠 Major

These tests are red because the spawn path never runs.

CI already shows both assertions getting zero spawn() calls, so this harness is not exercising the lifecycle path it is asserting on. Please fix the setup before merge—likely by initializing/resetting the same runtime singleton that app/server/modules/agents/__tests__/agents-manager.backups.test.ts seeds—otherwise this file keeps the suite failing.

Also applies to: 44-74

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/server/modules/agents/__tests__/agents-manager.test.ts` around lines 11 -
12, The tests never exercise the spawn path because the module import happens
against an uninitialized singleton; before dynamically importing
spawnLocalAgent/stopLocalAgent, reset module cache and initialize the same
runtime singleton used by agents-manager.backups.test.ts so the lifecycle path
is seeded correctly. Concretely: call jest.resetModules(), run the same
setup/seeding helper (or replicate its init logic) that
agents-manager.backups.test.ts uses to seed the runtime singleton, then
dynamically import { spawnLocalAgent, stopLocalAgent } so spawn() is invoked
against the proper runtime instance.

Comment on lines +70 to +99
const requestBackupCancellation = async (agentId: string, scheduleId: number) => {
	const activeBackupRun = getActiveBackupsByScheduleId().get(scheduleId);
	if (!activeBackupRun) {
		return false;
	}

	if (activeBackupRun.cancellationRequested) {
		return true;
	}

	activeBackupRun.cancellationRequested = true;

	const runtime = getAgentManagerRuntime();
	if (!runtime) {
		resolveActiveBackupRun(scheduleId, { status: "cancelled" });
		return true;
	}

	if (
		await runtime.cancelBackup(agentId, {
			jobId: activeBackupRun.jobId,
			scheduleId: activeBackupRun.scheduleShortId,
		})
	) {
		return true;
	}

	resolveActiveBackupRun(scheduleId, { status: "cancelled" });
	return true;
};

⚠️ Potential issue | 🟠 Major

Resolve locally when cancelBackup() rejects.

At Line 80, cancellationRequested is set before awaiting runtime.cancelBackup(). If that promise rejects, the active run stays registered and later calls hit the early return at Line 76, so the original completion promise can remain pending forever.

Suggested fix
 	activeBackupRun.cancellationRequested = true;
 
 	const runtime = getAgentManagerRuntime();
 	if (!runtime) {
 		resolveActiveBackupRun(scheduleId, { status: "cancelled" });
 		return true;
 	}
 
-	if (
-		await runtime.cancelBackup(agentId, {
-			jobId: activeBackupRun.jobId,
-			scheduleId: activeBackupRun.scheduleShortId,
-		})
-	) {
-		return true;
-	}
+	try {
+		if (
+			await runtime.cancelBackup(agentId, {
+				jobId: activeBackupRun.jobId,
+				scheduleId: activeBackupRun.scheduleShortId,
+			})
+		) {
+			return true;
+		}
+	} catch (error) {
+		logger.warn(
+			`Failed to cancel backup ${activeBackupRun.jobId} for schedule ${scheduleId}: ${
+				error instanceof Error ? error.message : String(error)
+			}`,
+		);
+	}
 
 	resolveActiveBackupRun(scheduleId, { status: "cancelled" });
 	return true;
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/server/modules/agents/agents-manager.ts` around lines 70 - 99, The
function requestBackupCancellation sets activeBackupRun.cancellationRequested
before awaiting runtime.cancelBackup, so if runtime.cancelBackup rejects the
active run remains registered and its completion promise can hang; fix it by
wrapping the await runtime.cancelBackup(...) call in a try/catch inside
requestBackupCancellation (use the existing activeBackupRun and runtime
variables), and in the catch call resolveActiveBackupRun(scheduleId, { status:
"cancelled" }) and return true so the active run is cleaned up; keep the
cancellationRequested flag semantics but ensure cleanup happens on rejection of
runtime.cancelBackup.

Comment on lines +149 to +152
const sendResult = socket.send(message);
if (sendResult <= 0) {
	handleSendFailure(sendResult === 0 ? "connection issue" : "backpressure");
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the file and the context around lines 149-152
cd app/server/modules/agents/controller && wc -l session.ts && sed -n '140,160p' session.ts

Repository: nicotsx/zerobyte

Length of output: 588


🏁 Script executed:

# Search for ServerWebSocket usage and send() calls in the file to understand context
rg "socket\.send|handleSendFailure|closeSession" app/server/modules/agents/controller/session.ts -B 2 -A 2

Repository: nicotsx/zerobyte

Length of output: 921


🌐 Web query:

Bun ServerWebSocket.send() return value semantics documentation

💡 Result:

The Bun ServerWebSocket.send method returns a number indicating the result of the send operation, specifically for backpressure handling: - A value of 1 or greater (1+) represents the number of bytes successfully sent. - -1 means the message was enqueued but there is backpressure (the send buffer is full, and the message is queued for later transmission). - 0 means the message was dropped due to a connection issue (e.g., the connection is closed or unhealthy). This return value allows developers to handle backpressure explicitly, such as by pausing message sending when -1 is returned and resuming on the 'drain' event. Unlike the browser/client WebSocket.send which returns void/undefined, Bun's server-side implementation provides this numeric status for performance and reliability in server scenarios. Signature: send(data: string | BufferSource, compress?: boolean): number



Don't close the session on Bun backpressure.

ServerWebSocket.send() returns -1 when the frame was enqueued under backpressure (the send buffer is full but the message is queued for later transmission), and 0 only when it was dropped because of a connection issue. Closing the session for all <= 0 results means a healthy agent experiencing buffer pressure will be disconnected and its backup cancelled even though the message was accepted. Restrict the failure path to 0, and handle -1 as backpressure instead.

Suggested fix
  const sendResult = socket.send(message);
- if (sendResult <= 0) {
-   handleSendFailure(sendResult === 0 ? "connection issue" : "backpressure");
- }
+ if (sendResult === 0) {
+   handleSendFailure("connection issue");
+ } else if (sendResult < 0) {
+   logger.warn(
+     `Backpressure while sending to agent ${socket.data.agentId} on ${socket.data.id}; keeping session open`,
+   );
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/server/modules/agents/controller/session.ts` around lines 149 - 152, The
current check treats socket.send() <= 0 as a fatal send failure and closes the
session; change it so only a return value of 0 triggers
handleSendFailure("connection issue") (i.e., close), while -1 is handled as
backpressure (call handleSendFailure("backpressure") or a non-fatal path) so
agents are not disconnected when the frame is enqueued; update the conditional
around socket.send(message) in the session send logic (reference socket.send and
handleSendFailure) to distinguish 0 vs -1 and ensure -1 does not close the
session.
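The return-value semantics described above can be isolated in a small pure function, which keeps the close-versus-backpressure decision testable on its own. This is an illustrative sketch only; `classifySendResult` and `SendOutcome` are hypothetical names, not code from this PR.

```typescript
// Hypothetical helper mapping Bun's ServerWebSocket.send() return value
// onto the three outcomes discussed in the review comment.
type SendOutcome = "sent" | "backpressure" | "dropped";

function classifySendResult(sendResult: number): SendOutcome {
  if (sendResult > 0) return "sent"; // bytes were written to the socket
  if (sendResult === 0) return "dropped"; // connection issue: frame was not accepted
  return "backpressure"; // -1: frame enqueued, resume on the "drain" event
}
```

Only the "dropped" outcome should close the session; "backpressure" means the frame was accepted and will be flushed once the socket drains.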

Comment on lines +14 to +18
const hydrateAgentRuntimeState = (runtime: LegacyAgentRuntimeState): AgentRuntimeState => ({
	...runtime,
	activeBackupsByScheduleId: runtime.activeBackupsByScheduleId ?? new Map(),
	activeBackupScheduleIdsByJobId: runtime.activeBackupScheduleIdsByJobId ?? new Map(),
});

⚠️ Potential issue | 🟠 Major

Hydrate legacy map fields by type, not by nullishness.

hasActiveBackupMaps() only reaches this branch when at least one field is missing or not a Map, but ?? preserves non-nullish invalid values. A hot-reloaded runtime like { activeBackupsByScheduleId: {} } will still be returned as AgentRuntimeState, and the next .get() / .set() will fail at runtime.

Suggested fix
 const hydrateAgentRuntimeState = (runtime: LegacyAgentRuntimeState): AgentRuntimeState => ({
 	...runtime,
-	activeBackupsByScheduleId: runtime.activeBackupsByScheduleId ?? new Map(),
-	activeBackupScheduleIdsByJobId: runtime.activeBackupScheduleIdsByJobId ?? new Map(),
+	activeBackupsByScheduleId:
+		runtime.activeBackupsByScheduleId instanceof Map ? runtime.activeBackupsByScheduleId : new Map(),
+	activeBackupScheduleIdsByJobId:
+		runtime.activeBackupScheduleIdsByJobId instanceof Map
+			? runtime.activeBackupScheduleIdsByJobId
+			: new Map(),
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/server/modules/agents/helpers/runtime-state.dev.ts` around lines 14 - 18,
hydrateAgentRuntimeState currently uses nullish coalescing (??) so non-null but
invalid values (e.g. {} from hot reload) are preserved; change it to validate
types instead: for both activeBackupsByScheduleId and
activeBackupScheduleIdsByJobId, replace the nullish checks with type checks
(e.g. runtime.activeBackupsByScheduleId instanceof Map ?
runtime.activeBackupsByScheduleId : new Map()) so only actual Map instances are
kept, aligning behavior with hasActiveBackupMaps and preventing runtime
.get()/.set() failures when legacy fields are present but not Maps.
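The difference between nullish coalescing and a type guard is easy to demonstrate in isolation. The following is a simplified sketch (the `LegacyState`/`hydrate` names are hypothetical, reduced to one field) of the `instanceof Map` check the comment recommends:

```typescript
// A hot-reloaded runtime may carry a plain object where a Map is expected.
type LegacyState = { activeBackupsByScheduleId?: unknown };

function hydrate(state: LegacyState): { activeBackupsByScheduleId: Map<number, unknown> } {
  return {
    // `??` would keep a stale `{}` from hot reload; `instanceof Map` only
    // preserves values that actually support .get()/.set().
    activeBackupsByScheduleId:
      state.activeBackupsByScheduleId instanceof Map
        ? (state.activeBackupsByScheduleId as Map<number, unknown>)
        : new Map(),
  };
}
```

With `??`, `hydrate({ activeBackupsByScheduleId: {} })` would pass the plain object through and the next `.get()` would throw; with the guard it is replaced by a fresh Map.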

Comment on lines +36 to +77
void (async () => {
	const stderrLines: string[] = [];
	const result = await resticBackupMock(
		fromPartial<SafeSpawnParams>({
			signal: request.signal,
			onStderr: (line: string) => {
				stderrLines.push(line);
			},
		}),
	);
	const running = runningBackups.get(request.scheduleId);
	if (!running || running.cancelled) {
		return;
	}

	runningBackups.delete(request.scheduleId);

	if (result.exitCode === 0 || result.exitCode === 3) {
		let parsedResult: Record<string, unknown> | null = null;
		if (result.summary) {
			try {
				parsedResult = JSON.parse(result.summary) as Record<string, unknown>;
			} catch {
				parsedResult = null;
			}
		}

		resolve({
			status: "completed",
			exitCode: result.exitCode,
			result: fromAny(parsedResult),
			warningDetails: stderrLines.join("\n") || null,
		});
		return;
	}

	const resultWithStderr = result as typeof result & { stderr?: string };
	resolve({
		status: "failed",
		error: stderrLines.join("\n") || resultWithStderr.stderr || result.error,
	});
})().catch(() => {});

⚠️ Potential issue | 🟠 Major

Rejected mock backups currently hang the test instead of failing.

If resticBackupMock() rejects, the catch(() => {}) at Line 77 drops the error and leaves the outer promise unresolved. Anything awaiting runBackupMock() will sit until the test times out.

Suggested fix
-				void (async () => {
+				void (async () => {
 					const stderrLines: string[] = [];
 					const result = await resticBackupMock(
 						fromPartial<SafeSpawnParams>({
 							signal: request.signal,
 							onStderr: (line: string) => {
 								stderrLines.push(line);
@@
-				})().catch(() => {});
+				})().catch((error) => {
+					runningBackups.delete(request.scheduleId);
+					resolve({
+						status: "failed",
+						error: error instanceof Error ? error.message : String(error),
+					});
+				});
 			});
 		},
 	);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/test/helpers/agent-mock.ts` around lines 36 - 77, The IIFE swallowing
errors with catch(() => {}) causes runBackupMock callers to hang when
resticBackupMock rejects; update the catch to forward the failure: inside the
catch handler for the IIFE, call the outer resolve with a failed result (e.g.
resolve({ status: "failed", error: String(err) })) and clean up runningBackups
(delete(request.scheduleId)) so the outer promise is settled on error; reference
the resticBackupMock call and the existing resolve variable to implement this
change.
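The underlying pattern is general: a background task's rejection must settle the outer promise instead of being swallowed. Here is a minimal sketch under assumed names (`runWithSettledFailure` and the `BackupResult` shape are hypothetical, not from the repository):

```typescript
type BackupResult = { status: "completed" | "failed"; error?: string };

// Wraps a task so that a rejection resolves the outer promise with a
// failed result rather than leaving callers hanging forever.
function runWithSettledFailure(task: () => Promise<BackupResult>): Promise<BackupResult> {
  return new Promise((resolve) => {
    void task()
      .then(resolve)
      .catch((error: unknown) => {
        // Forward the rejection as a failed result instead of dropping it,
        // the same fix applied to the mock's catch(() => {}) above.
        resolve({
          status: "failed",
          error: error instanceof Error ? error.message : String(error),
        });
      });
  });
}
```

A caller awaiting `runWithSettledFailure(...)` always receives a settled result, so a rejecting mock fails the test immediately instead of timing out.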
