
Feat/one2b action items trpc #98

Open
Prajjawalk wants to merge 4 commits into main from
feat/one2b-action-items-trpc

Conversation

Collaborator

@Prajjawalk Prajjawalk commented May 2, 2026

Description

Summary

Two new tRPC procedures on actionRouter for the upcoming one2b
agent integration. This is foundational work — both procedures stay
dormant until the gated agent in the companion mastra PR calls them.

What this adds

actionRouter.bulkCreateFromTranscript (mutation)

Bulk-creates Action rows from a list of items extracted by the agent
from a meeting transcript.

  • Verifies caller's workspace membership and that the transcription
    session belongs to the same workspace
  • For each item, applies a 3-tier assignee resolution so the agent
    doesn't have to know whether the assignee has an exponential account:
    1. Email matches a workspace UserActionAssignee
    2. Email matches an existing TranscriptionSessionParticipant for
      this meeting → ActionParticipantAssignee
    3. Email is unknown → auto-create a Participant + ActionParticipantAssignee
  • Stamps each action with sourceType='meeting', sourceId=transcriptionSessionId,
    source='agent-transcript', lastUpdatedBy='AGENT' for downstream
    audit / discovery
  • Maps agent priority (HIGH/MEDIUM/LOW) → exponential's
    ("1st Priority"/"Quick"/"5th Priority")
  • Per-item try/catch so a single bad item doesn't abort the batch;
    failures returned in skipped with a reason
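The priority mapping a couple of bullets up is small enough to show inline. A self-contained sketch — the helper name mirrors the mapPriority helper this PR adds to the router, and the vocabulary strings come from the description above:

```typescript
// Map agent-side priority labels to exponential's priority vocabulary:
// HIGH -> "1st Priority", MEDIUM -> "Quick", LOW -> "5th Priority".
type AgentPriority = "HIGH" | "MEDIUM" | "LOW";

function mapPriority(p: AgentPriority): string {
  if (p === "HIGH") return "1st Priority";
  if (p === "LOW") return "5th Priority";
  return "Quick"; // MEDIUM
}

console.log(mapPriority("HIGH")); // "1st Priority"
```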

actionRouter.findBySource (query)

Workspace-scoped lookup by sourceType + optional sourceId. Optional
assigneeEmail filter resolves through both the User-backed
ActionAssignee path and the Participant-backed
ActionParticipantAssignee path so callers don't need to know which
assignment kind was used.

Helper — findUserByEmailInWorkspace

Added to src/server/services/access/resolvers/workspaceResolver.ts.
Looks up a User by email and verifies workspace membership in one call.
Reusable for any future agent integration that resolves humans by email.
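The helper itself isn't quoted on this page; here is a rough sketch of the described behavior against an illustrative Prisma-like client. The explicit db parameter and the DbLike shape are assumptions for the sake of a runnable example — the real helper uses the project's Prisma client and takes only (email, workspaceId):

```typescript
// Illustrative Prisma-like client shape; only the two lookups the helper
// needs are modeled here.
interface DbLike {
  user: {
    findUnique(args: { where: { email: string } }): Promise<{
      id: string;
      email: string;
      name: string | null;
    } | null>;
  };
  workspaceUser: {
    findUnique(args: {
      where: { userId_workspaceId: { userId: string; workspaceId: string } };
    }): Promise<{ userId: string } | null>;
  };
}

// Look up a User by email, then verify workspace membership — one call site
// for both checks; returns null if either lookup misses.
async function findUserByEmailInWorkspace(
  db: DbLike,
  email: string,
  workspaceId: string,
) {
  const user = await db.user.findUnique({ where: { email } });
  if (!user) return null;
  const membership = await db.workspaceUser.findUnique({
    where: { userId_workspaceId: { userId: user.id, workspaceId } },
  });
  return membership ? user : null;
}
```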

Updated — actionRouter.update

Added optional lastUpdatedBy (enum: AGENT/USER_EMAIL/USER_WHATSAPP/USER_UI)
and lastUpdatedSource (string) inputs so agents can stamp source
attribution on updates the same way bulkCreateFromTranscript does on
creates.
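In the router these are zod inputs; a dependency-free sketch of the same enum restriction follows. The values come from the description above; the parse helper name is illustrative, not part of the PR:

```typescript
// The four attribution sources accepted by actionRouter.update, matching
// the schema convention described above.
const LAST_UPDATED_BY = ["AGENT", "USER_EMAIL", "USER_WHATSAPP", "USER_UI"] as const;
type LastUpdatedBy = (typeof LAST_UPDATED_BY)[number];

// Narrow an untyped value to the enum, roughly what
// z.enum([...]).optional() does in the real input schema.
function parseLastUpdatedBy(value: unknown): LastUpdatedBy | undefined {
  if (value === undefined) return undefined;
  if (
    typeof value === "string" &&
    (LAST_UPDATED_BY as readonly string[]).includes(value)
  ) {
    return value as LastUpdatedBy;
  }
  throw new Error(`Invalid lastUpdatedBy: ${String(value)}`);
}
```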

Test infra

  • 6 integration tests for bulkCreateFromTranscript covering each
    assignee path, batch resilience, and authorization
  • 5 integration tests for findBySource covering filtering and auth
  • truncateAllTables now clears TranscriptionSession,
    TranscriptionSessionParticipant, ActionParticipantAssignee

Internal — query include shape

bulkCreateFromTranscript and findBySource both apply include: { participant: true }
on participantAssignees, so callers get the participant's name/email
without a follow-up query.

Why this is safe to merge

These procedures have no callers in the existing codebase. The mastra
side that consumes them is gated behind MASTRA_ONE2B_AGENTS_ENABLED
(off by default) so even after both PRs merge, nothing exercises this
path until cutover. See mastra/docs/one2b-cutover.md
for the full sequence.

Test plan

  • Lint + typecheck pass (npm run check)
  • New integration tests pass (npm run test:integration)
  • No existing routers / Prisma calls regressed
  • Manual sanity: in Prisma Studio, manually invoke
    bulkCreateFromTranscript against a real workspace + transcript;
    confirm Action rows + ActionAssignee/ActionParticipantAssignee
    rows created as expected

Companion PR

Mastra: https://github.com/positonic/mastra/pull/new/feat/one2b-action-items-agent

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Performance improvement
  • Code refactoring

Database Changes

  • This PR includes database schema changes (migrations)
  • NO database changes in this PR

If migrations are included:

  • Migration name(s):
  • Schema changes:
  • Coordinated with team: Yes/No

Testing

  • Tested locally
  • Tested in preview deployment
  • Added/updated tests
  • All tests passing

Checklist

  • My code follows the project's style guidelines
  • I have performed a self-review of my own code
  • I have commented my code in hard-to-understand areas
  • My changes generate no new warnings
  • Database migrations tested on test database
  • Coordinated with team on any migration conflicts

Deployment Notes

Any special deployment considerations?

Related Issues

Closes #

Summary by CodeRabbit

  • New Features

    • Bulk-create actions from transcripts with per-item error isolation and automatic assignee resolution by email
    • Discover/filter actions by source type & ID, with optional assignee and status filtering
    • Action updates can include attribution fields (agent/user/email/WhatsApp/UI)
  • Tests

    • Added integration tests covering bulk creation and find-by-source behaviors
  • Chores

    • Safer test DB startup and robust table truncation for integration tests

… integration

Two new tRPC procedures on actionRouter to support the upcoming
one2b agent port. Both verify caller workspace membership before
any DB work and accept workspaceId explicitly so the caller never
relies on session defaults.

bulkCreateFromTranscript:
- Accepts a transcription session id + array of action items extracted
  from the transcript
- For each item, applies 3-tier assignee resolution:
  1) email matches a workspace User -> ActionAssignee
  2) email matches an existing TranscriptionSessionParticipant
     for this meeting -> ActionParticipantAssignee
  3) email is unknown -> auto-create a Participant + assign
  This avoids forcing every meeting attendee to onboard to exponential
  just to be assigned a task
- Stamps each action with sourceType='meeting', sourceId=transcript id,
  source='agent-transcript', lastUpdatedBy='AGENT' so the action's
  origin is queryable later
- Maps agent priority (HIGH/MEDIUM/LOW) to exponential's priority
  vocabulary; uses status='ACTIVE' (OVERDUE is derived at query time
  from dueDate, IN_PROGRESS is tracked via kanbanStatus per existing
  conventions)
- Per-item try/catch so a single bad item doesn't abort the batch;
  failures are returned in `skipped` with a reason

findBySource:
- Workspace-scoped query by sourceType + optional sourceId
- Optional assigneeEmail filter that resolves through both
  ActionAssignee (User) and ActionParticipantAssignee (Participant)
  paths so callers don't need to know which kind of assignment
  was used

Helper:
- findUserByEmailInWorkspace in workspaceResolver — lookup by email
  + WorkspaceUser membership check

Tests:
- 6 integration tests for bulkCreateFromTranscript covering each
  assignee path, batch resilience, and authorization
- 5 integration tests for findBySource covering filtering and auth

Test infra:
- truncateAllTables now clears TranscriptionSession,
  TranscriptionSessionParticipant, and ActionParticipantAssignee
  so integration tests can run against the new tables
…nt in source queries

Two corrections surfaced by the mastra-side action-items-tools port:

1. action.update zod input was missing lastUpdatedBy + lastUpdatedSource,
   so the agent couldn't stamp source attribution on updates the way
   bulkCreateFromTranscript does on creates. Added both as optional
   inputs (lastUpdatedBy is enum-restricted to AGENT/USER_EMAIL/
   USER_WHATSAPP/USER_UI to match the schema convention).

2. bulkCreateFromTranscript and findBySource both included
   participantAssignees: true without joining the participant relation,
   so participant-backed assignees came back without name/email at
   runtime. Switched to participantAssignees: { include: { participant:
   true } } in both spots so callers get the participant data they
   need to render assignee info.

vercel Bot commented May 2, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| exponential | Ready | Preview, Comment | May 2, 2026 4:17pm |



coderabbitai Bot commented May 2, 2026

Warning

Rate limit exceeded

@Prajjawalk has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 44 minutes and 54 seconds before requesting another review.

To keep reviews running without waiting, you can enable usage-based add-on for your organization. This allows additional reviews beyond the hourly cap. Account admins can enable it under billing.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a0805a0c-39d5-4c6c-b36a-b726cfe7b1d0

📥 Commits

Reviewing files that changed from the base of the PR and between ffdbfb5 and f75c51c.

📒 Files selected for processing (2)
  • src/server/api/routers/__tests__/action.integration.test.ts
  • src/server/api/routers/action.ts
📝 Walkthrough

Walkthrough

Adds two agent-integration endpoints to the action router (bulkCreateFromTranscript, findBySource), extends action update input with optional attribution fields, adds a workspace user-by-email resolver, expands integration tests, and replaces test DB truncation with a dynamic, safer table-cleaning implementation.

Changes

Action Endpoints & Test infra

  • Data Shape & Imports — src/server/api/routers/action.ts
    Add a Prisma type-only import and import the workspace email resolver to support typed includes and assignee resolution.
  • Workspace User Resolver — src/server/services/access/resolvers/workspaceResolver.ts
    Add findUserByEmailInWorkspace(email, workspaceId), which returns { id, email, name } if the user exists and is a workspace member, otherwise null.
  • Attribution Fields — src/server/api/routers/action.ts (lines 589–595)
    Extend the actionRouter.update input schema with an optional lastUpdatedBy enum and lastUpdatedSource string.
  • Bulk Create Endpoint — src/server/api/routers/action.ts (lines 2285–2474)
    Add bulkCreateFromTranscript: validate workspace/transcript/project scope; map agent priorities; per-item create with source/sourceType/sourceId and lastUpdatedBy/lastUpdatedSource; resolve assignees by email via workspace user → existing participant → created participant; collect created and skipped with a per-item try/catch.
  • Find by Source Endpoint — src/server/api/routers/action.ts (lines 2476–2544)
    Add findBySource: validate workspace membership; build a Prisma where filter by workspaceId, sourceType, optional sourceId, optional status, and optional assigneeEmail (resolved to a workspace user or participant email); return actions with project/transcriptionSession/assignees/participantAssignees includes, ordered and limited.
  • Integration Tests — src/server/api/routers/__tests__/action.integration.test.ts (lines 544–901)
    Add bulkCreateFromTranscript tests (workspace-member assignee resolution, participant fallback, auto-created participants, access/ownership rejections, per-item failure TODO) and findBySource tests (workspace-scoped source matching, assigneeEmail filtering for both workspace users and participants, limit handling, access control).
  • Test Infrastructure — src/test/test-db.ts
    startTestDatabase() adds a guard preventing migrations from running against non-local/non-test DBs unless explicitly allowed; truncateAllTables() now discovers public tables from pg_tables and issues DELETE FROM "<table>" statements wrapped in a single transaction with session_replication_role = 'replica' and an extended timeout/maxWait.
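The truncation strategy described for test-db.ts can be sketched as a pure statement builder. The real implementation executes these via Prisma inside one interactive transaction; the function name is illustrative, and restoring the role to 'origin' at the end is an assumption (a SET LOCAL inside the transaction would reset automatically at commit):

```typescript
// Given table names discovered from pg_tables, build the statements the
// walkthrough describes: disable FK-trigger enforcement for the session,
// DELETE from every public table, then restore the default role.
function buildTruncateStatements(tables: string[]): string[] {
  return [
    `SET session_replication_role = 'replica'`,
    ...tables.map((t) => `DELETE FROM "${t}"`),
    `SET session_replication_role = 'origin'`,
  ];
}
```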

Sequence Diagram(s)

sequenceDiagram
    actor Agent
    participant Router as action.ts<br/>bulkCreateFromTranscript
    participant Workspace as workspaceResolver<br/>findUserByEmailInWorkspace
    participant DB as Prisma/Database

    Agent->>Router: POST bulkCreateFromTranscript(workspaceId, transcriptId, items)
    Router->>DB: Verify workspace membership & fetch transcript/project
    loop per transcript item
        Router->>Workspace: Resolve assignee by email
        Workspace->>DB: Query user by email
        Workspace->>DB: Verify workspaceUser membership
        alt workspace user
            Workspace-->>Router: user {id,email,name}
            Router->>DB: Create action + attach user assignee
        else not workspace user
            Router->>DB: Query participant by email
            alt participant exists
                Router->>DB: Create action + attach participant assignee
            else create participant
                Router->>DB: Create participant, then create action + attach participant assignee
            end
        end
        alt success
            Router-->>Agent: add to created[]
        else failure
            Router-->>Agent: add to skipped[]
        end
    end
    Router-->>Agent: Return { created, skipped }
sequenceDiagram
    actor Client
    participant Router as action.ts<br/>findBySource
    participant Workspace as workspaceResolver<br/>findUserByEmailInWorkspace
    participant DB as Prisma/Database

    Client->>Router: POST findBySource(workspaceId, sourceType, [sourceId], [status], [assigneeEmail], [limit])
    Router->>DB: Verify workspace membership
    alt assigneeEmail provided
        Router->>Workspace: Resolve assignee by email
        Workspace->>DB: Query user & workspaceUser
        alt workspace user
            Router->>DB: Query actions filtered by user assignee + source + workspace
        else
            Router->>DB: Query actions filtered by participant email + source + workspace
        end
    else
        Router->>DB: Query actions filtered by source + workspace
    end
    DB-->>Router: Return actions (includes + ordered + limited)
    Router-->>Client: Return actions[]

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • Feat/one2b agent schema #97: Introduced schema/migration changes (ActionParticipantAssignee, TranscriptionSessionParticipant, source/attribution fields) that this PR's endpoints and test DB cleanup rely on.

Poem

🐰 I hopped through transcripts, nibbling lines with glee,

Bulk-creating actions from each little spree.
Assignees found or sprouted new, neat and spry,
Actions by source now leap up to the sky.
A rabbit’s small cheer — clean DBs and passing CI! 🥕✨

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 66.67%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
  • Title check — ❓ Inconclusive: The title 'Feat/one2b action items trpc' is related to the changeset but somewhat vague, and doesn't clearly convey the main change of adding the bulkCreateFromTranscript and findBySource endpoints. Resolution: consider a more specific title such as 'Add bulkCreateFromTranscript and findBySource tRPC procedures for one2b agent integration'.

✅ Passed checks (3 passed)

  • Description check — ✅ Passed: The description is comprehensive and well-structured, covering all required template sections (type of change, database changes, testing, checklist) with detailed explanations of the new procedures and safety considerations.
  • Linked Issues check — ✅ Passed: Skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: Skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Review rate limit: 0/1 reviews remaining, refill in 44 minutes and 54 seconds.

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/server/api/routers/action.ts`:
- Around line 2317-2329: The inline workspace membership check using
ctx.db.workspaceUser.findUnique (and throwing TRPCError) incorrectly excludes
team-based access; replace this manual check with the centralized access
resolver from src/server/services/access (e.g., call the workspace access
helper/resolver used elsewhere) so it considers both direct and team
memberships, remove the duplicate logic around workspaceUser.findUnique in the
action router, and ensure the resolver's boolean or thrown error is used
consistently (preserving the FORBIDDEN behavior) in the same locations where
ctx.db.workspaceUser.findUnique and TRPCError are currently referenced.
- Around line 2394-2469: Wrap the entire per-item write flow (the calls starting
with ctx.db.action.create and including findUserByEmailInWorkspace lookup,
ctx.db.actionAssignee.create, ctx.db.transcriptionSessionParticipant.create,
ctx.db.actionParticipantAssignee.create, and the final
ctx.db.action.findUniqueOrThrow hydration) in an item-scoped database
transaction so any failure rolls back the action row and related writes; use the
transaction API (e.g., ctx.db.$transaction or equivalent) to obtain a
transactional client (tx) and replace ctx.db calls inside the transaction with
tx, commit only on success, and only push the hydrated result into created after
the transaction completes, otherwise push the error into skipped.
- Around line 2293-2544: The router’s bulkCreateFromTranscript logic (including
assignee resolution, participant creation, provenance/source mapping and
per-item persistence) should be moved into a dedicated service (e.g.,
ActionService.bulkCreateFromTranscript) and the router should delegate to that
service; implement a service function that accepts the ctx (or db + session),
input (transcriptionSessionId, workspaceId, projectId, items) and returns
{created, skipped}, encapsulating the current logic around mapPriority,
findUserByEmailInWorkspace lookups, transcriptionSessionParticipant.create,
action.create, actionAssignee.create, actionParticipantAssignee.create, and the
includeShape hydration; also extract the source/query construction used in
findBySource into a service helper (e.g., ActionService.buildSourceQuery) so
findBySource delegates to it and only handles input validation and membership
checks; ensure the router still verifies workspace membership (as currently
done) and pass necessary context into the new service functions to preserve
transactions/partial failures and result shapes.
- Around line 2515-2527: When input.assigneeEmail is present, include both
assignee paths instead of choosing one: call
findUserByEmailInWorkspace(input.assigneeEmail, input.workspaceId) to get the
workspace user (if any) and then build the query so it matches either
ActionAssignee (where.assignees.some.userId === user.id) OR participantAssignees
(where.participantAssignees.some.participant.email === input.assigneeEmail); if
no workspace user exists still include the participantAssignees condition. Use
an OR clause on the where object (combining the assignees condition and the
participantAssignees condition) so both paths are searched when
input.assigneeEmail is provided.

In `@src/server/services/access/resolvers/workspaceResolver.ts`:
- Around line 93-117: The current findUserByEmailInWorkspace only checks
db.workspaceUser and so misses users who access the workspace via team
membership; update findUserByEmailInWorkspace to treat team-based access as
valid by calling or reusing the existing getWorkspaceMembership(user.id,
workspaceId) (from the same file) instead of only checking
db.workspaceUser.findUnique with userId_workspaceId; if getWorkspaceMembership
returns a membership, return the user object as before, otherwise return null.
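One of the action.ts suggestions above — searching both assignee paths with an OR instead of choosing one — can be sketched as a pure where-clause builder. The object shape mirrors the Prisma filter used in findBySource; the userId parameter stands in for the resolved workspace user, which may be null:

```typescript
// Build the assignee portion of the Prisma `where` so that a workspace-user
// match and a participant-email match are ORed together, as the review
// suggests, rather than one path being chosen exclusively.
function buildAssigneeFilter(assigneeEmail: string, userId: string | null) {
  const participantClause = {
    participantAssignees: { some: { participant: { email: assigneeEmail } } },
  };
  // No workspace user resolved: only the participant path can match.
  if (!userId) return participantClause;
  return {
    OR: [{ assignees: { some: { userId } } }, participantClause],
  };
}
```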

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a057f101-a72d-44c8-8b71-f9d904bd5d2b

📥 Commits

Reviewing files that changed from the base of the PR and between d395d59 and ec08114.

📒 Files selected for processing (4)
  • src/server/api/routers/__tests__/action.integration.test.ts
  • src/server/api/routers/action.ts
  • src/server/services/access/resolvers/workspaceResolver.ts
  • src/test/test-db.ts

Comment on lines +2293 to +2544
bulkCreateFromTranscript: protectedProcedure
  .input(
    z.object({
      transcriptionSessionId: z.string(),
      workspaceId: z.string(),
      projectId: z.string().nullable().optional(),
      items: z
        .array(
          z.object({
            description: z.string().min(1),
            assigneeEmail: z.string().email().optional(),
            assigneeName: z.string().optional(),
            priority: z.enum(["HIGH", "MEDIUM", "LOW"]).default("MEDIUM"),
            dueDate: z.string().datetime().optional(),
            category: z.string().optional(),
            rawText: z.string().optional(),
          }),
        )
        .min(1),
    }),
  )
  .mutation(async ({ ctx, input }) => {
    const userId = ctx.session.user.id;

    // 1. Verify caller workspace membership.
    const membership = await ctx.db.workspaceUser.findUnique({
      where: {
        userId_workspaceId: { userId, workspaceId: input.workspaceId },
      },
      select: { userId: true },
    });
    if (!membership) {
      throw new TRPCError({
        code: "FORBIDDEN",
        message: "You are not a member of this workspace",
      });
    }

    // 2. Verify the transcript exists and belongs to this workspace.
    const transcript = await ctx.db.transcriptionSession.findUnique({
      where: { id: input.transcriptionSessionId },
      select: { id: true, workspaceId: true },
    });
    if (!transcript) {
      throw new TRPCError({
        code: "NOT_FOUND",
        message: "Transcription session not found",
      });
    }
    if (transcript.workspaceId !== input.workspaceId) {
      throw new TRPCError({
        code: "FORBIDDEN",
        message: "Transcription session does not belong to this workspace",
      });
    }

    // 3. Resolve effective projectId. Verify ownership if provided.
    const resolvedProjectId: string | null = input.projectId ?? null;
    if (resolvedProjectId) {
      const project = await ctx.db.project.findUnique({
        where: { id: resolvedProjectId },
        select: { id: true, workspaceId: true },
      });
      if (!project || project.workspaceId !== input.workspaceId) {
        throw new TRPCError({
          code: "FORBIDDEN",
          message: "Project does not belong to this workspace",
        });
      }
    }

    // 4. Map agent priority strings to internal priority values.
    const mapPriority = (p: "HIGH" | "MEDIUM" | "LOW"): string => {
      if (p === "HIGH") return "1st Priority";
      if (p === "LOW") return "5th Priority";
      return "Quick";
    };

    const includeShape = {
      project: true,
      transcriptionSession: { select: { id: true, title: true } },
      assignees: {
        include: {
          user: { select: { id: true, name: true, email: true, image: true } },
        },
      },
      participantAssignees: { include: { participant: true } },
    } satisfies Prisma.ActionInclude;

    type CreatedAction = Prisma.ActionGetPayload<{ include: typeof includeShape }>;
    const created: CreatedAction[] = [];
    const skipped: { rawText: string | undefined; reason: string }[] = [];

    // 5. Process each item with a per-item try/catch so one failure
    // doesn't abort the whole batch. We deliberately do NOT wrap the
    // loop in an outer transaction — each item is logically independent
    // and we want partial successes to persist.
    for (const item of input.items) {
      try {
        const dueDate = item.dueDate ? new Date(item.dueDate) : null;

        const action = await ctx.db.action.create({
          data: {
            name: item.description,
            description: item.rawText ?? null,
            dueDate,
            priority: mapPriority(item.priority),
            status: "ACTIVE",
            workspaceId: input.workspaceId,
            projectId: resolvedProjectId,
            transcriptionSessionId: input.transcriptionSessionId,
            createdById: userId,
            source: "agent-transcript",
            sourceType: "meeting",
            sourceId: input.transcriptionSessionId,
            lastUpdatedBy: "AGENT",
            lastUpdatedSource: "agent-action-items-tool",
          },
        });

        // 5b. Resolve assignee using 3-tier strategy.
        if (item.assigneeEmail) {
          const workspaceUser = await findUserByEmailInWorkspace(
            item.assigneeEmail,
            input.workspaceId,
          );

          if (workspaceUser) {
            await ctx.db.actionAssignee.create({
              data: { actionId: action.id, userId: workspaceUser.id },
            });
          } else {
            const existingParticipant =
              await ctx.db.transcriptionSessionParticipant.findUnique({
                where: {
                  transcriptionSessionId_email: {
                    transcriptionSessionId: input.transcriptionSessionId,
                    email: item.assigneeEmail,
                  },
                },
                select: { id: true },
              });

            const participantId =
              existingParticipant?.id ??
              (
                await ctx.db.transcriptionSessionParticipant.create({
                  data: {
                    transcriptionSessionId: input.transcriptionSessionId,
                    workspaceId: input.workspaceId,
                    email: item.assigneeEmail,
                    name: item.assigneeName ?? null,
                  },
                  select: { id: true },
                })
              ).id;

            await ctx.db.actionParticipantAssignee.create({
              data: {
                actionId: action.id,
                participantId,
                workspaceId: input.workspaceId,
              },
            });
          }
        }

        // Reload with the include shape so the returned object is consistent.
        const hydrated = await ctx.db.action.findUniqueOrThrow({
          where: { id: action.id },
          include: includeShape,
        });
        created.push(hydrated);
      } catch (err) {
        const reason =
          err instanceof Error ? err.message : "Unknown error creating action";
        skipped.push({ rawText: item.rawText, reason });
      }
    }

    return { created, skipped };
  }),

// Look up actions by their (sourceType, sourceId) provenance, optionally
// filtered by assignee email and status. Used by the one2b agent to
// surface actions tied to a meeting/email/etc.
findBySource: protectedProcedure
  .input(
    z.object({
      workspaceId: z.string(),
      sourceType: z.string(),
      sourceId: z.string().optional(),
      assigneeEmail: z.string().email().optional(),
      status: z.string().optional(),
      limit: z.number().min(1).max(100).default(20),
    }),
  )
  .query(async ({ ctx, input }) => {
    const userId = ctx.session.user.id;

    // Verify caller workspace membership.
    const membership = await ctx.db.workspaceUser.findUnique({
      where: {
        userId_workspaceId: { userId, workspaceId: input.workspaceId },
      },
      select: { userId: true },
    });
    if (!membership) {
      throw new TRPCError({
        code: "FORBIDDEN",
        message: "You are not a member of this workspace",
      });
    }

    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    const where: any = {
      workspaceId: input.workspaceId,
      sourceType: input.sourceType,
    };
    if (input.sourceId) where.sourceId = input.sourceId;
    if (input.status) where.status = input.status;

    if (input.assigneeEmail) {
      const user = await findUserByEmailInWorkspace(
        input.assigneeEmail,
        input.workspaceId,
      );
      if (user) {
        where.assignees = { some: { userId: user.id } };
      } else {
        where.participantAssignees = {
          some: { participant: { email: input.assigneeEmail } },
        };
      }
    }

    return ctx.db.action.findMany({
      where,
      include: {
        project: true,
        transcriptionSession: { select: { id: true, title: true } },
        assignees: {
          include: {
            user: { select: { id: true, name: true, email: true, image: true } },
          },
        },
        participantAssignees: { include: { participant: true } },
      },
      orderBy: { createdAt: "desc" },
      take: input.limit,
    });
  }),

🛠️ Refactor suggestion | 🟠 Major | 🏗️ Heavy lift

Move this transcript-assignment flow into a service.

The router now owns assignee resolution, participant creation, provenance mapping, and source-query construction. Pulling that workflow into a service would make the access rules reusable, make the per-item transaction easier to implement cleanly, and keep the tRPC layer focused on input/output concerns.

As per coding guidelines, "All API logic must use tRPC for end-to-end type safety. Implement business logic in the services layer, not in routers."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/api/routers/action.ts` around lines 2293 - 2544, The router’s
bulkCreateFromTranscript logic (including assignee resolution, participant
creation, provenance/source mapping and per-item persistence) should be moved
into a dedicated service (e.g., ActionService.bulkCreateFromTranscript) and the
router should delegate to that service; implement a service function that
accepts the ctx (or db + session), input (transcriptionSessionId, workspaceId,
projectId, items) and returns {created, skipped}, encapsulating the current
logic around mapPriority, findUserByEmailInWorkspace lookups,
transcriptionSessionParticipant.create, action.create, actionAssignee.create,
actionParticipantAssignee.create, and the includeShape hydration; also extract
the source/query construction used in findBySource into a service helper (e.g.,
ActionService.buildSourceQuery) so findBySource delegates to it and only handles
input validation and membership checks; ensure the router still verifies
workspace membership (as currently done) and pass necessary context into the new
service functions to preserve transactions/partial failures and result shapes.

Comment thread src/server/api/routers/action.ts
Comment on lines +2394 to +2469
        const action = await ctx.db.action.create({
          data: {
            name: item.description,
            description: item.rawText ?? null,
            dueDate,
            priority: mapPriority(item.priority),
            status: "ACTIVE",
            workspaceId: input.workspaceId,
            projectId: resolvedProjectId,
            transcriptionSessionId: input.transcriptionSessionId,
            createdById: userId,
            source: "agent-transcript",
            sourceType: "meeting",
            sourceId: input.transcriptionSessionId,
            lastUpdatedBy: "AGENT",
            lastUpdatedSource: "agent-action-items-tool",
          },
        });

        // 5b. Resolve assignee using 3-tier strategy.
        if (item.assigneeEmail) {
          const workspaceUser = await findUserByEmailInWorkspace(
            item.assigneeEmail,
            input.workspaceId,
          );

          if (workspaceUser) {
            await ctx.db.actionAssignee.create({
              data: { actionId: action.id, userId: workspaceUser.id },
            });
          } else {
            const existingParticipant =
              await ctx.db.transcriptionSessionParticipant.findUnique({
                where: {
                  transcriptionSessionId_email: {
                    transcriptionSessionId: input.transcriptionSessionId,
                    email: item.assigneeEmail,
                  },
                },
                select: { id: true },
              });

            const participantId =
              existingParticipant?.id ??
              (
                await ctx.db.transcriptionSessionParticipant.create({
                  data: {
                    transcriptionSessionId: input.transcriptionSessionId,
                    workspaceId: input.workspaceId,
                    email: item.assigneeEmail,
                    name: item.assigneeName ?? null,
                  },
                  select: { id: true },
                })
              ).id;

            await ctx.db.actionParticipantAssignee.create({
              data: {
                actionId: action.id,
                participantId,
                workspaceId: input.workspaceId,
              },
            });
          }
        }

        // Reload with the include shape so the returned object is consistent.
        const hydrated = await ctx.db.action.findUniqueOrThrow({
          where: { id: action.id },
          include: includeShape,
        });
        created.push(hydrated);
      } catch (err) {
        const reason =
          err instanceof Error ? err.message : "Unknown error creating action";
        skipped.push({ rawText: item.rawText, reason });

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Make each transcript item atomic.

If anything fails after action.create() succeeds—participant creation, assignee linking, or the hydration read—the item is pushed into skipped even though the action row has already been persisted. That leaves partially applied data behind and makes the created/skipped result inaccurate. Please wrap the per-item write path in an item-scoped transaction.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/api/routers/action.ts` around lines 2394 - 2469, Wrap the entire
per-item write flow (the calls starting with ctx.db.action.create and including
findUserByEmailInWorkspace lookup, ctx.db.actionAssignee.create,
ctx.db.transcriptionSessionParticipant.create,
ctx.db.actionParticipantAssignee.create, and the final
ctx.db.action.findUniqueOrThrow hydration) in an item-scoped database
transaction so any failure rolls back the action row and related writes; use the
transaction API (e.g., ctx.db.$transaction or equivalent) to obtain a
transactional client (tx) and replace ctx.db calls inside the transaction with
tx, commit only on success, and only push the hydrated result into created after
the transaction completes, otherwise push the error into skipped.
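The pattern the comment asks for can be sketched with an in-memory stand-in. `withTransaction`, the `Row` store, and the sample items below are all illustrative, not the project's API; with Prisma the equivalent is `await ctx.db.$transaction(async (tx) => { … })` with the `ctx.db` calls inside replaced by `tx`. The point is the shape: nothing is persisted unless the whole item succeeds, so `created`/`skipped` stay accurate.

```typescript
// Illustrative stand-in for an item-scoped transaction: writes are buffered
// and only flushed to the store on commit, so a failure mid-item leaves no
// partial rows behind.
type Row = { table: string; data: Record<string, unknown> };

function withTransaction(
  store: Row[],
  fn: (tx: { create: (table: string, data: Record<string, unknown>) => void }) => void,
): void {
  const buffer: Row[] = [];
  fn({ create: (table, data) => { buffer.push({ table, data }); } }); // any throw skips the flush
  store.push(...buffer); // "commit": apply all writes at once
}

const db: Row[] = [];
const created: string[] = [];
const skipped: { rawText: string; reason: string }[] = [];

const items = [
  { description: "Send recap", rawText: "send recap", assigneeEmail: "a@example.com" },
  { description: "", rawText: "???", assigneeEmail: "b@example.com" }, // fails mid-item
];

for (const item of items) {
  try {
    withTransaction(db, (tx) => {
      tx.create("action", { name: item.description });
      if (!item.description) throw new Error("empty description"); // simulate a late failure
      tx.create("actionAssignee", { email: item.assigneeEmail });
    });
    created.push(item.description);
  } catch (err) {
    skipped.push({
      rawText: item.rawText,
      reason: err instanceof Error ? err.message : "unknown",
    });
  }
}
// Item 2 fails after its first buffered write, yet db holds only item 1's rows.
```

The hydrated result would likewise be pushed into `created` only after the transaction callback returns successfully.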

Comment on lines +2515 to +2527
if (input.assigneeEmail) {
  const user = await findUserByEmailInWorkspace(
    input.assigneeEmail,
    input.workspaceId,
  );
  if (user) {
    where.assignees = { some: { userId: user.id } };
  } else {
    where.participantAssignees = {
      some: { participant: { email: input.assigneeEmail } },
    };
  }
}

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Search both assignee paths for the same email.

Right now this picks ActionAssignee when the email belongs to a workspace user and skips the participantAssignees path entirely. That means participant-backed actions disappear whenever the same email also exists on a workspace user, which is the overlap this API needs to handle.

Suggested fix
       if (input.assigneeEmail) {
         const user = await findUserByEmailInWorkspace(
           input.assigneeEmail,
           input.workspaceId,
         );
-        if (user) {
-          where.assignees = { some: { userId: user.id } };
-        } else {
-          where.participantAssignees = {
-            some: { participant: { email: input.assigneeEmail } },
-          };
-        }
+        where.OR = [
+          ...(user ? [{ assignees: { some: { userId: user.id } } }] : []),
+          {
+            participantAssignees: {
+              some: { participant: { email: input.assigneeEmail } },
+            },
+          },
+        ];
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/api/routers/action.ts` around lines 2515 - 2527, When
input.assigneeEmail is present, include both assignee paths instead of choosing
one: call findUserByEmailInWorkspace(input.assigneeEmail, input.workspaceId) to
get the workspace user (if any) and then build the query so it matches either
ActionAssignee (where.assignees.some.userId === user.id) OR participantAssignees
(where.participantAssignees.some.participant.email === input.assigneeEmail); if
no workspace user exists still include the participantAssignees condition. Use
an OR clause on the where object (combining the assignees condition and the
participantAssignees condition) so both paths are searched when
input.assigneeEmail is provided.
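The suggested fix reduces to a pure helper over the `where` object. The Prisma filter shapes mirror the diff above; `userId` stands in for the result of `findUserByEmailInWorkspace` (null when the email has no workspace user), and the helper name is illustrative:

```typescript
// Builds the Prisma-style assignee filter: the participant-backed path is
// always included, and the user-backed path is added when the email also
// resolves to a workspace user, so neither assignment kind is hidden.
type Where = { OR?: object[] };

function assigneeFilter(assigneeEmail: string, userId: string | null): Where {
  const where: Where = {};
  where.OR = [
    ...(userId ? [{ assignees: { some: { userId } } }] : []),
    { participantAssignees: { some: { participant: { email: assigneeEmail } } } },
  ];
  return where;
}
```

With a resolved user the filter carries both OR branches; with an unknown email it degrades to the participant branch alone, matching the behavior the review asks for.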

Comment on lines +93 to +117
export async function findUserByEmailInWorkspace(
  email: string,
  workspaceId: string,
): Promise<{ id: string; email: string; name: string | null } | null> {
  const user = await db.user.findUnique({
    where: { email },
    select: { id: true, email: true, name: true },
  });

  if (!user?.email) {
    return null;
  }

  const membership = await db.workspaceUser.findUnique({
    where: {
      userId_workspaceId: { userId: user.id, workspaceId },
    },
    select: { userId: true },
  });

  if (!membership) {
    return null;
  }

  return { id: user.id, email: user.email, name: user.name };

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Honor team-based workspace access in this email lookup.

This helper only checks WorkspaceUser, but getWorkspaceMembership() in the same file treats team membership as a valid workspace access path too. A user who only reaches the workspace through a team will resolve as null here, so transcript assignee resolution and findBySource({ assigneeEmail }) can miss legitimate users.

Suggested fix
 export async function findUserByEmailInWorkspace(
   email: string,
   workspaceId: string,
 ): Promise<{ id: string; email: string; name: string | null } | null> {
   const user = await db.user.findUnique({
     where: { email },
     select: { id: true, email: true, name: true },
   });

   if (!user?.email) {
     return null;
   }

-  const membership = await db.workspaceUser.findUnique({
-    where: {
-      userId_workspaceId: { userId: user.id, workspaceId },
-    },
-    select: { userId: true },
-  });
+  const membership = await getWorkspaceMembership(db, user.id, workspaceId);

   if (!membership) {
     return null;
   }

   return { id: user.id, email: user.email, name: user.name };
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/services/access/resolvers/workspaceResolver.ts` around lines 93 -
117, The current findUserByEmailInWorkspace only checks db.workspaceUser and so
misses users who access the workspace via team membership; update
findUserByEmailInWorkspace to treat team-based access as valid by calling or
reusing the existing getWorkspaceMembership(user.id, workspaceId) (from the same
file) instead of only checking db.workspaceUser.findUnique with
userId_workspaceId; if getWorkspaceMembership returns a membership, return the
user object as before, otherwise return null.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
src/server/api/routers/__tests__/action.integration.test.ts (1)

543-901: ⚡ Quick win

Add one regression test for action.update attribution.

This PR also extends action.update with lastUpdatedBy / lastUpdatedSource, but the new coverage only exercises the two new agent procedures. A small integration test here would keep that new attribution path from landing unverified.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/api/routers/__tests__/action.integration.test.ts` around lines 543
- 901, Add an integration test that verifies action.update sets the new
attribution fields: create a user and workspace, create an action via
db.action.create, call caller.action.update (using createTestCaller) to change
the action, then load the action (via db.action.findUnique or the update result)
and assert lastUpdatedBy and lastUpdatedSource match the expected values (e.g.,
"AGENT" and the agent source constant); reference functions/classes:
action.update procedure, createTestCaller, db.action.create,
db.action.findUnique to locate where to add the test.
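The missing regression test could take roughly the shape below. The real version would go through `createTestCaller` and `caller.action.update` against the test database; here a synchronous in-memory `update` stands in so the assertion shape runs on its own, and the stamped values (`"AGENT"`, `"agent-action-items-tool"`) are taken from the create snippet earlier in this review:

```typescript
// In-memory stand-in for action.update attribution stamping. The real test
// would create the action via db.action.create, mutate it through
// caller.action.update, then assert on the reloaded row the same way.
type Action = {
  id: string;
  name: string;
  lastUpdatedBy?: string;
  lastUpdatedSource?: string;
};

const actions = new Map<string, Action>();
actions.set("a1", { id: "a1", name: "Draft recap" });

function update(
  id: string,
  data: Partial<Action>,
  attribution: { by: string; source: string },
): Action {
  const row = actions.get(id);
  if (!row) throw new Error("not found");
  const next = {
    ...row,
    ...data,
    lastUpdatedBy: attribution.by,
    lastUpdatedSource: attribution.source,
  };
  actions.set(id, next);
  return next;
}

update("a1", { name: "Send recap" }, { by: "AGENT", source: "agent-action-items-tool" });

const reloaded = actions.get("a1")!;
if (reloaded.lastUpdatedBy !== "AGENT") throw new Error("lastUpdatedBy not stamped");
if (reloaded.lastUpdatedSource !== "agent-action-items-tool") throw new Error("lastUpdatedSource not stamped");
```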
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/server/api/routers/__tests__/action.integration.test.ts`:
- Around line 671-706: Update the two tests that currently use
.rejects.toThrow(TRPCError) (the "rejects unauthorized workspace" and "rejects
mismatched transcript workspace" cases and the similar assertions around lines
888-899) to assert the specific tRPC error code instead of any TRPCError; after
calling createTestCaller(...).action.bulkCreateFromTranscript(...), use
.rejects.toMatchObject({ code: "FORBIDDEN" }) (or the appropriate code your
server sends) so the test verifies access-control failure precisely rather than
any server error; update both test cases to expect the exact error code.

In `@src/test/test-db.ts`:
- Around line 26-31: The current guard allows non-loopback DB URLs if the
database name contains "test"; instead require explicit opt-in via
ALLOW_NON_LOCAL_TEST_DB=1 for any non-loopback host. Update the host-check logic
in src/test/test-db.ts (the function/area that decides allowed DBs — look for
the code handling "localhost / 127.0.0.1 / ::1", "host.docker.internal", and the
database-name "test" check and the call site for truncateAllTables()) so that:
1) loopback hosts (localhost/127.0.0.1/::1) remain allowed; 2) any non-loopback
host (including host.docker.internal or hosts with DB names containing "test")
must only be allowed when process.env.ALLOW_NON_LOCAL_TEST_DB === "1"; and 3)
remove or disable the database-name contains "test" bypass; adjust error/log
message to clearly state the required ALLOW_NON_LOCAL_TEST_DB env var when
rejecting a non-loopback DB.
- Around line 118-120: The afterEach() cleanup must implement the documented
fallback when setting session_replication_role fails: wrap the current
transaction block that runs "SET LOCAL session_replication_role = 'replica'" and
the bulk truncate/delete in a try-catch inside afterEach(); on catching the
permission/privilege error from client.query (or the transaction helper used),
fall back to iterating the table list (the same tables array used in the
cleanup) and perform per-table DELETEs within individual statements, retrying
each DELETE when it fails with a foreign-key/constraint error until it succeeds
(or until a small retry limit is hit) so that dependent rows are removed in
successive passes; ensure you still run the same BEGIN/COMMIT/ROLLBACK around
the primary approach and only use the per-statement retry loop when the SET
LOCAL session_replication_role = 'replica' privilege is denied.

---

Nitpick comments:
In `@src/server/api/routers/__tests__/action.integration.test.ts`:
- Around line 543-901: Add an integration test that verifies action.update sets
the new attribution fields: create a user and workspace, create an action via
db.action.create, call caller.action.update (using createTestCaller) to change
the action, then load the action (via db.action.findUnique or the update result)
and assert lastUpdatedBy and lastUpdatedSource match the expected values (e.g.,
"AGENT" and the agent source constant); reference functions/classes:
action.update procedure, createTestCaller, db.action.create,
db.action.findUnique to locate where to add the test.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e3a84fcb-162b-49c3-ae21-af2fe04075e1

📥 Commits

Reviewing files that changed from the base of the PR and between ec08114 and ffdbfb5.

📒 Files selected for processing (2)
  • src/server/api/routers/__tests__/action.integration.test.ts
  • src/test/test-db.ts

Comment on lines +671 to +706
it("rejects unauthorized workspace", async () => {
  const { ws, session } = await setupTranscript();
  const stranger = await createUser(db);

  const caller = createTestCaller(stranger.id);
  await expect(
    caller.action.bulkCreateFromTranscript({
      transcriptionSessionId: session.id,
      workspaceId: ws.id,
      items: [{ description: "Nope", priority: "MEDIUM" }],
    }),
  ).rejects.toThrow(TRPCError);
});

it("rejects mismatched transcript workspace", async () => {
  const owner = await createUser(db);
  const ws1 = await createWorkspace(db, { ownerId: owner.id });
  const ws2 = await createWorkspace(db, { ownerId: owner.id });
  const session = await db.transcriptionSession.create({
    data: {
      sessionId: `sess-${Math.random().toString(36).slice(2, 10)}`,
      workspaceId: ws2.id,
      userId: owner.id,
      title: "Other workspace",
    },
  });

  const caller = createTestCaller(owner.id);
  await expect(
    caller.action.bulkCreateFromTranscript({
      transcriptionSessionId: session.id,
      workspaceId: ws1.id,
      items: [{ description: "Nope", priority: "MEDIUM" }],
    }),
  ).rejects.toThrow(TRPCError);
});

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Assert the specific tRPC error codes here.

.rejects.toThrow(TRPCError) will pass for any server-side tRPC failure, including unrelated validation or NOT_FOUND paths. These tests are meant to pin access-control behavior, so they should verify the expected code too.

Also applies to: 888-899

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/api/routers/__tests__/action.integration.test.ts` around lines 671
- 706, Update the two tests that currently use .rejects.toThrow(TRPCError) (the
"rejects unauthorized workspace" and "rejects mismatched transcript workspace"
cases and the similar assertions around lines 888-899) to assert the specific
tRPC error code instead of any TRPCError; after calling
createTestCaller(...).action.bulkCreateFromTranscript(...), use
.rejects.toMatchObject({ code: "FORBIDDEN" }) (or the appropriate code your
server sends) so the test verifies access-control failure precisely rather than
any server error; update both test cases to expect the exact error code.

Comment thread src/test/test-db.ts
Comment on lines +26 to +31
// Allow:
// - localhost / 127.0.0.1 / ::1 hosts
// - testcontainers (host.docker.internal, dynamic ports)
// - URLs whose database name contains "test" (e.g. exponential_test)
// - explicit opt-in via ALLOW_NON_LOCAL_TEST_DB=1 (escape hatch for
// deliberate scenarios; never set this in normal dev)

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Require explicit opt-in for any non-loopback database.

Allowing a remote URL solely because the database name contains test still lets truncateAllTables() wipe shared QA/staging data. This guard only becomes reliable if non-local hosts always require ALLOW_NON_LOCAL_TEST_DB=1.

Safer guard
-    const dbNameLooksLikeTest = /test/i.test(dbName);
     const explicitOptIn = process.env.ALLOW_NON_LOCAL_TEST_DB === "1";

-    if (!isLocalhost && !dbNameLooksLikeTest && !explicitOptIn) {
+    if (!isLocalhost && !explicitOptIn) {

Also applies to: 40-52

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/test/test-db.ts` around lines 26 - 31, The current guard allows
non-loopback DB URLs if the database name contains "test"; instead require
explicit opt-in via ALLOW_NON_LOCAL_TEST_DB=1 for any non-loopback host. Update
the host-check logic in src/test/test-db.ts (the function/area that decides
allowed DBs — look for the code handling "localhost / 127.0.0.1 / ::1",
"host.docker.internal", and the database-name "test" check and the call site for
truncateAllTables()) so that: 1) loopback hosts (localhost/127.0.0.1/::1) remain
allowed; 2) any non-loopback host (including host.docker.internal or hosts with
DB names containing "test") must only be allowed when
process.env.ALLOW_NON_LOCAL_TEST_DB === "1"; and 3) remove or disable the
database-name contains "test" bypass; adjust error/log message to clearly state
the required ALLOW_NON_LOCAL_TEST_DB env var when rejecting a non-loopback DB.
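A minimal sketch of the tightened guard, assuming the connection string is parsed with the WHATWG `URL` class and the env flag keeps the name `ALLOW_NON_LOCAL_TEST_DB` (the helper name `isSafeTestDbUrl` is illustrative, not the file's actual structure):

```typescript
// Loopback hosts pass unconditionally; every other host requires the
// explicit ALLOW_NON_LOCAL_TEST_DB=1 opt-in. The old "db name contains
// 'test'" bypass is intentionally absent.
const LOOPBACK_HOSTS = new Set(["localhost", "127.0.0.1", "::1", "[::1]"]);

function isSafeTestDbUrl(
  databaseUrl: string,
  env: Record<string, string | undefined>,
): boolean {
  const host = new URL(databaseUrl).hostname;
  if (LOOPBACK_HOSTS.has(host)) return true;
  return env.ALLOW_NON_LOCAL_TEST_DB === "1";
}
```

Under this guard a remote `exponential_test` database is rejected unless the operator deliberately sets the flag, which is the reliability property the comment is after.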

Comment thread src/test/test-db.ts
Comment on lines +118 to +120
* - CON: `session_replication_role = replica` requires SUPERUSER or
* REPLICATION privileges. If that fails, fall back to a per-statement loop
* that retries until no FK errors remain.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In PostgreSQL 16, does SET LOCAL session_replication_role = 'replica' require superuser privileges, and what error should non-superusers expect?

💡 Result:

In PostgreSQL 16, SET LOCAL session_replication_role = 'replica' requires either superuser privileges or the appropriate SET privilege on the session_replication_role parameter, which can be granted via GRANT SET ON PARAMETER session_replication_role TO user;. Non-superusers without this privilege will receive the error: "permission denied to set parameter "session_replication_role"".

Citations:


🏁 Script executed:

fd -t f "test-db.ts" | head -5

Repository: positonic/exponential

Length of output: 85


🏁 Script executed:

cat -n src/test/test-db.ts | sed -n '100,170p'

Repository: positonic/exponential

Length of output: 3476


🏁 Script executed:

# Search for error handling around session_replication_role in the file
rg -n "session_replication_role|catch.*error" src/test/test-db.ts -A 2 -B 2

Repository: positonic/exponential

Length of output: 1193


🏁 Script executed:

# Check if deleteAllTablesWithFkAwareRetries function exists
rg -n "deleteAllTablesWithFkAwareRetries" src/test/test-db.ts

Repository: positonic/exponential

Length of output: 47


🏁 Script executed:

# Search entire codebase for deleteAllTablesWithFkAwareRetries
rg "deleteAllTablesWithFkAwareRetries" --type ts --type tsx

Repository: positonic/exponential

Length of output: 93


🏁 Script executed:

# Search for any error handling around session_replication_role in codebase
rg "session_replication_role" -A 5 -B 5

Repository: positonic/exponential

Length of output: 2891


🏁 Script executed:

# Check if there's any catch block related to FK or replication role elsewhere
rg "catch.*error|catch\s*\(" src/test/ --type ts --type tsx -B 2 -A 3

Repository: positonic/exponential

Length of output: 93


🏁 Script executed:

# Search for catch blocks in test-db.ts
rg "catch" src/test/test-db.ts -B 2 -A 3

Repository: positonic/exponential

Length of output: 47


🏁 Script executed:

# Get file type information
fd -t f "test-db.ts" -x file {}

Repository: positonic/exponential

Length of output: 101


🏁 Script executed:

# Check entire test directory for error handling patterns
rg "catch.*error" src/test/ -B 2 -A 3

Repository: positonic/exponential

Length of output: 47


🏁 Script executed:

# Read the full function to understand the complete picture
cat -n src/test/test-db.ts | sed -n '122,157p'

Repository: positonic/exponential

Length of output: 1732


🏁 Script executed:

# Check if there are any similar fallback/retry functions in the codebase
rg "retry|fallback|FK.*error" src/ -i

Repository: positonic/exponential

Length of output: 19510


Implement the documented fallback when session_replication_role cannot be set.

The docstring (lines 118-120) explicitly documents a fallback: "If that fails, fall back to a per-statement loop that retries until no FK errors remain." However, the implementation has no error handling for this scenario. On managed Postgres where non-superusers cannot execute SET LOCAL session_replication_role = 'replica', the entire afterEach() cleanup fails instead of falling back. Wrap the transaction in try-catch and handle the privilege error by falling back to individual deletes with retry logic:

Fallback sketch
-  await db.$transaction(statements, {
-    // Generous timeout for remote test DBs where ~150 round-trips can be slow.
-    timeout: 60_000,
-    maxWait: 10_000,
-  });
+  try {
+    await db.$transaction(statements, {
+      // Generous timeout for remote test DBs where ~150 round-trips can be slow.
+      timeout: 60_000,
+      maxWait: 10_000,
+    });
+  } catch (error) {
+    if (!isSessionReplicationRolePrivilegeError(error)) throw error;
+    await deleteAllTablesWithFkAwareRetries(db, quotedTables);
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/test/test-db.ts` around lines 118 - 120, The afterEach() cleanup must
implement the documented fallback when setting session_replication_role fails:
wrap the current transaction block that runs "SET LOCAL session_replication_role
= 'replica'" and the bulk truncate/delete in a try-catch inside afterEach(); on
catching the permission/privilege error from client.query (or the transaction
helper used), fall back to iterating the table list (the same tables array used
in the cleanup) and perform per-table DELETEs within individual statements,
retrying each DELETE when it fails with a foreign-key/constraint error until it
succeeds (or until a small retry limit is hit) so that dependent rows are
removed in successive passes; ensure you still run the same
BEGIN/COMMIT/ROLLBACK around the primary approach and only use the per-statement
retry loop when the SET LOCAL session_replication_role = 'replica' privilege is
denied.
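The documented fallback can be sketched as a generic multi-pass delete loop. Here `del` stands in for running `DELETE FROM <table>` through the client and is assumed to throw while dependent rows still exist; the function name and pass limit are illustrative:

```typescript
// FK-aware fallback: try every table each pass, keep the ones that fail
// with constraint errors for the next pass, and stop when either nothing
// is left or a full pass makes no progress.
function deleteWithFkRetries(
  tables: string[],
  del: (table: string) => void,
  maxPasses = 10,
): void {
  let remaining = [...tables];
  for (let pass = 0; pass < maxPasses && remaining.length > 0; pass++) {
    const failed: string[] = [];
    for (const table of remaining) {
      try {
        del(table); // stand-in for `DELETE FROM <table>`
      } catch {
        failed.push(table); // FK violation: retry once dependents are gone
      }
    }
    if (failed.length === remaining.length) {
      throw new Error(`no progress deleting: ${failed.join(", ")}`);
    }
    remaining = failed;
  }
}
```

The primary `session_replication_role = 'replica'` path stays as the fast default; this loop only runs from the catch branch when that `SET LOCAL` is denied.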
