Feat/one2b action items trpc #98
… integration
Two new tRPC procedures on actionRouter to support the upcoming
one2b agent port. Both verify caller workspace membership before
any DB work and accept workspaceId explicitly so the caller never
relies on session defaults.
bulkCreateFromTranscript:
- Accepts a transcription session id + array of action items extracted
from the transcript
- For each item, applies 3-tier assignee resolution:
1) email matches a workspace User -> ActionAssignee
2) email matches an existing TranscriptionSessionParticipant
for this meeting -> ActionParticipantAssignee
3) email is unknown -> auto-create a Participant + assign
This avoids forcing every meeting attendee to onboard to exponential
just to be assigned a task.
- Stamps each action with sourceType='meeting', sourceId=transcript id,
source='agent-transcript', lastUpdatedBy='AGENT' so the action's
origin is queryable later
- Maps agent priority (HIGH/MEDIUM/LOW) to exponential's priority
vocabulary; uses status='ACTIVE' (OVERDUE is derived at query time
from dueDate, IN_PROGRESS is tracked via kanbanStatus per existing
conventions)
- Per-item try/catch so a single bad item doesn't abort the batch;
failures are returned in `skipped` with a reason
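The 3-tier resolution described above can be sketched as a pure decision function. This is an illustrative model only, not the router code: the two sets stand in for the DB lookups against User/WorkspaceUser and TranscriptionSessionParticipant, and all names are hypothetical.

```typescript
// Sketch of the 3-tier assignee resolution; sets stand in for DB lookups.
type AssigneeTier =
  | "ACTION_ASSIGNEE"      // tier 1: email matches a workspace User
  | "PARTICIPANT_ASSIGNEE" // tier 2: email matches an existing meeting participant
  | "CREATE_PARTICIPANT";  // tier 3: unknown email -> auto-create + assign

function resolveAssigneeTier(
  email: string,
  workspaceUserEmails: Set<string>,
  participantEmails: Set<string>,
): AssigneeTier {
  if (workspaceUserEmails.has(email)) return "ACTION_ASSIGNEE";
  if (participantEmails.has(email)) return "PARTICIPANT_ASSIGNEE";
  return "CREATE_PARTICIPANT";
}
```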
findBySource:
- Workspace-scoped query by sourceType + optional sourceId
- Optional assigneeEmail filter that resolves through both
ActionAssignee (User) and ActionParticipantAssignee (Participant)
paths so callers don't need to know which kind of assignment
was used
Helper:
- findUserByEmailInWorkspace in workspaceResolver — lookup by email
+ WorkspaceUser membership check
Tests:
- 6 integration tests for bulkCreateFromTranscript covering each
assignee path, batch resilience, and authorization
- 5 integration tests for findBySource covering filtering and auth
Test infra:
- truncateAllTables now clears TranscriptionSession,
TranscriptionSessionParticipant, and ActionParticipantAssignee
so integration tests can run against the new tables
…nt in source queries
Two corrections surfaced by the mastra-side action-items-tools port:
1. action.update zod input was missing lastUpdatedBy + lastUpdatedSource,
so the agent couldn't stamp source attribution on updates the way
bulkCreateFromTranscript does on creates. Added both as optional
inputs (lastUpdatedBy is enum-restricted to AGENT/USER_EMAIL/
USER_WHATSAPP/USER_UI to match the schema convention).
2. bulkCreateFromTranscript and findBySource both included
participantAssignees: true without joining the participant relation,
so participant-backed assignees came back without name/email at
runtime. Switched to participantAssignees: { include: { participant:
true } } in both spots so callers get the participant data they
need to render assignee info.
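A minimal sketch of correction 1, the enum restriction on `lastUpdatedBy`, modeled here as a plain validator rather than the actual zod schema (in the router this is presumably a `z.enum([...]).optional()` input); the function name is hypothetical, only the enum values come from the commit message.

```typescript
// Sketch only: models the enum restriction on the optional lastUpdatedBy field.
const LAST_UPDATED_BY = ["AGENT", "USER_EMAIL", "USER_WHATSAPP", "USER_UI"] as const;
type LastUpdatedBy = (typeof LAST_UPDATED_BY)[number];

function parseLastUpdatedBy(value: string | undefined): LastUpdatedBy | undefined {
  if (value === undefined) return undefined; // field is optional
  if (!(LAST_UPDATED_BY as readonly string[]).includes(value)) {
    throw new Error(`invalid lastUpdatedBy: ${value}`);
  }
  return value as LastUpdatedBy;
}
```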
📝 Walkthrough

Adds two agent-integration endpoints to the action router (bulkCreateFromTranscript, findBySource), extends action update input with optional attribution fields, adds a workspace user-by-email resolver, expands integration tests, and replaces test DB truncation with a dynamic, safer table-cleaning implementation.

Changes: Action Endpoints & Test infra
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Agent
    participant Router as action.ts<br/>bulkCreateFromTranscript
    participant Workspace as workspaceResolver<br/>findUserByEmailInWorkspace
    participant DB as Prisma/Database
    Agent->>Router: POST bulkCreateFromTranscript(workspaceId, transcriptId, items)
    Router->>DB: Verify workspace membership & fetch transcript/project
    loop per transcript item
        Router->>Workspace: Resolve assignee by email
        Workspace->>DB: Query user by email
        Workspace->>DB: Verify workspaceUser membership
        alt workspace user
            Workspace-->>Router: user {id,email,name}
            Router->>DB: Create action + attach user assignee
        else not workspace user
            Router->>DB: Query participant by email
            alt participant exists
                Router->>DB: Create action + attach participant assignee
            else create participant
                Router->>DB: Create participant, then create action + attach participant assignee
            end
        end
        alt success
            Router-->>Agent: add to created[]
        else failure
            Router-->>Agent: add to skipped[]
        end
    end
    Router-->>Agent: Return { created, skipped }
```

```mermaid
sequenceDiagram
    actor Client
    participant Router as action.ts<br/>findBySource
    participant Workspace as workspaceResolver<br/>findUserByEmailInWorkspace
    participant DB as Prisma/Database
    Client->>Router: POST findBySource(workspaceId, sourceType, [sourceId], [status], [assigneeEmail], [limit])
    Router->>DB: Verify workspace membership
    alt assigneeEmail provided
        Router->>Workspace: Resolve assignee by email
        Workspace->>DB: Query user & workspaceUser
        alt workspace user
            Router->>DB: Query actions filtered by user assignee + source + workspace
        else
            Router->>DB: Query actions filtered by participant email + source + workspace
        end
    else
        Router->>DB: Query actions filtered by source + workspace
    end
    DB-->>Router: Return actions (includes + ordered + limited)
    Router-->>Client: Return actions[]
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/server/api/routers/action.ts`:
- Around line 2317-2329: The inline workspace membership check using
ctx.db.workspaceUser.findUnique (and throwing TRPCError) incorrectly excludes
team-based access; replace this manual check with the centralized access
resolver from src/server/services/access (e.g., call the workspace access
helper/resolver used elsewhere) so it considers both direct and team
memberships, remove the duplicate logic around workspaceUser.findUnique in the
action router, and ensure the resolver's boolean or thrown error is used
consistently (preserving the FORBIDDEN behavior) in the same locations where
ctx.db.workspaceUser.findUnique and TRPCError are currently referenced.
- Around line 2394-2469: Wrap the entire per-item write flow (the calls starting
with ctx.db.action.create and including findUserByEmailInWorkspace lookup,
ctx.db.actionAssignee.create, ctx.db.transcriptionSessionParticipant.create,
ctx.db.actionParticipantAssignee.create, and the final
ctx.db.action.findUniqueOrThrow hydration) in an item-scoped database
transaction so any failure rolls back the action row and related writes; use the
transaction API (e.g., ctx.db.$transaction or equivalent) to obtain a
transactional client (tx) and replace ctx.db calls inside the transaction with
tx, commit only on success, and only push the hydrated result into created after
the transaction completes, otherwise push the error into skipped.
- Around line 2293-2544: The router’s bulkCreateFromTranscript logic (including
assignee resolution, participant creation, provenance/source mapping and
per-item persistence) should be moved into a dedicated service (e.g.,
ActionService.bulkCreateFromTranscript) and the router should delegate to that
service; implement a service function that accepts the ctx (or db + session),
input (transcriptionSessionId, workspaceId, projectId, items) and returns
{created, skipped}, encapsulating the current logic around mapPriority,
findUserByEmailInWorkspace lookups, transcriptionSessionParticipant.create,
action.create, actionAssignee.create, actionParticipantAssignee.create, and the
includeShape hydration; also extract the source/query construction used in
findBySource into a service helper (e.g., ActionService.buildSourceQuery) so
findBySource delegates to it and only handles input validation and membership
checks; ensure the router still verifies workspace membership (as currently
done) and pass necessary context into the new service functions to preserve
transactions/partial failures and result shapes.
- Around line 2515-2527: When input.assigneeEmail is present, include both
assignee paths instead of choosing one: call
findUserByEmailInWorkspace(input.assigneeEmail, input.workspaceId) to get the
workspace user (if any) and then build the query so it matches either
ActionAssignee (where.assignees.some.userId === user.id) OR participantAssignees
(where.participantAssignees.some.participant.email === input.assigneeEmail); if
no workspace user exists still include the participantAssignees condition. Use
an OR clause on the where object (combining the assignees condition and the
participantAssignees condition) so both paths are searched when
input.assigneeEmail is provided.
In `@src/server/services/access/resolvers/workspaceResolver.ts`:
- Around line 93-117: The current findUserByEmailInWorkspace only checks
db.workspaceUser and so misses users who access the workspace via team
membership; update findUserByEmailInWorkspace to treat team-based access as
valid by calling or reusing the existing getWorkspaceMembership(user.id,
workspaceId) (from the same file) instead of only checking
db.workspaceUser.findUnique with userId_workspaceId; if getWorkspaceMembership
returns a membership, return the user object as before, otherwise return null.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a057f101-a72d-44c8-8b71-f9d904bd5d2b
📒 Files selected for processing (4)
- src/server/api/routers/__tests__/action.integration.test.ts
- src/server/api/routers/action.ts
- src/server/services/access/resolvers/workspaceResolver.ts
- src/test/test-db.ts
```typescript
bulkCreateFromTranscript: protectedProcedure
  .input(
    z.object({
      transcriptionSessionId: z.string(),
      workspaceId: z.string(),
      projectId: z.string().nullable().optional(),
      items: z
        .array(
          z.object({
            description: z.string().min(1),
            assigneeEmail: z.string().email().optional(),
            assigneeName: z.string().optional(),
            priority: z.enum(["HIGH", "MEDIUM", "LOW"]).default("MEDIUM"),
            dueDate: z.string().datetime().optional(),
            category: z.string().optional(),
            rawText: z.string().optional(),
          }),
        )
        .min(1),
    }),
  )
  .mutation(async ({ ctx, input }) => {
    const userId = ctx.session.user.id;

    // 1. Verify caller workspace membership.
    const membership = await ctx.db.workspaceUser.findUnique({
      where: {
        userId_workspaceId: { userId, workspaceId: input.workspaceId },
      },
      select: { userId: true },
    });
    if (!membership) {
      throw new TRPCError({
        code: "FORBIDDEN",
        message: "You are not a member of this workspace",
      });
    }

    // 2. Verify the transcript exists and belongs to this workspace.
    const transcript = await ctx.db.transcriptionSession.findUnique({
      where: { id: input.transcriptionSessionId },
      select: { id: true, workspaceId: true },
    });
    if (!transcript) {
      throw new TRPCError({
        code: "NOT_FOUND",
        message: "Transcription session not found",
      });
    }
    if (transcript.workspaceId !== input.workspaceId) {
      throw new TRPCError({
        code: "FORBIDDEN",
        message: "Transcription session does not belong to this workspace",
      });
    }

    // 3. Resolve effective projectId. Verify ownership if provided.
    const resolvedProjectId: string | null = input.projectId ?? null;
    if (resolvedProjectId) {
      const project = await ctx.db.project.findUnique({
        where: { id: resolvedProjectId },
        select: { id: true, workspaceId: true },
      });
      if (!project || project.workspaceId !== input.workspaceId) {
        throw new TRPCError({
          code: "FORBIDDEN",
          message: "Project does not belong to this workspace",
        });
      }
    }

    // 4. Map agent priority strings to internal priority values.
    const mapPriority = (p: "HIGH" | "MEDIUM" | "LOW"): string => {
      if (p === "HIGH") return "1st Priority";
      if (p === "LOW") return "5th Priority";
      return "Quick";
    };

    const includeShape = {
      project: true,
      transcriptionSession: { select: { id: true, title: true } },
      assignees: {
        include: {
          user: { select: { id: true, name: true, email: true, image: true } },
        },
      },
      participantAssignees: { include: { participant: true } },
    } satisfies Prisma.ActionInclude;

    type CreatedAction = Prisma.ActionGetPayload<{ include: typeof includeShape }>;
    const created: CreatedAction[] = [];
    const skipped: { rawText: string | undefined; reason: string }[] = [];

    // 5. Process each item with a per-item try/catch so one failure
    // doesn't abort the whole batch. We deliberately do NOT wrap the
    // loop in an outer transaction — each item is logically independent
    // and we want partial successes to persist.
    for (const item of input.items) {
      try {
        const dueDate = item.dueDate ? new Date(item.dueDate) : null;

        const action = await ctx.db.action.create({
          data: {
            name: item.description,
            description: item.rawText ?? null,
            dueDate,
            priority: mapPriority(item.priority),
            status: "ACTIVE",
            workspaceId: input.workspaceId,
            projectId: resolvedProjectId,
            transcriptionSessionId: input.transcriptionSessionId,
            createdById: userId,
            source: "agent-transcript",
            sourceType: "meeting",
            sourceId: input.transcriptionSessionId,
            lastUpdatedBy: "AGENT",
            lastUpdatedSource: "agent-action-items-tool",
          },
        });

        // 5b. Resolve assignee using 3-tier strategy.
        if (item.assigneeEmail) {
          const workspaceUser = await findUserByEmailInWorkspace(
            item.assigneeEmail,
            input.workspaceId,
          );

          if (workspaceUser) {
            await ctx.db.actionAssignee.create({
              data: { actionId: action.id, userId: workspaceUser.id },
            });
          } else {
            const existingParticipant =
              await ctx.db.transcriptionSessionParticipant.findUnique({
                where: {
                  transcriptionSessionId_email: {
                    transcriptionSessionId: input.transcriptionSessionId,
                    email: item.assigneeEmail,
                  },
                },
                select: { id: true },
              });

            const participantId =
              existingParticipant?.id ??
              (
                await ctx.db.transcriptionSessionParticipant.create({
                  data: {
                    transcriptionSessionId: input.transcriptionSessionId,
                    workspaceId: input.workspaceId,
                    email: item.assigneeEmail,
                    name: item.assigneeName ?? null,
                  },
                  select: { id: true },
                })
              ).id;

            await ctx.db.actionParticipantAssignee.create({
              data: {
                actionId: action.id,
                participantId,
                workspaceId: input.workspaceId,
              },
            });
          }
        }

        // Reload with the include shape so the returned object is consistent.
        const hydrated = await ctx.db.action.findUniqueOrThrow({
          where: { id: action.id },
          include: includeShape,
        });
        created.push(hydrated);
      } catch (err) {
        const reason =
          err instanceof Error ? err.message : "Unknown error creating action";
        skipped.push({ rawText: item.rawText, reason });
      }
    }

    return { created, skipped };
  }),

// Look up actions by their (sourceType, sourceId) provenance, optionally
// filtered by assignee email and status. Used by the one2b agent to
// surface actions tied to a meeting/email/etc.
findBySource: protectedProcedure
  .input(
    z.object({
      workspaceId: z.string(),
      sourceType: z.string(),
      sourceId: z.string().optional(),
      assigneeEmail: z.string().email().optional(),
      status: z.string().optional(),
      limit: z.number().min(1).max(100).default(20),
    }),
  )
  .query(async ({ ctx, input }) => {
    const userId = ctx.session.user.id;

    // Verify caller workspace membership.
    const membership = await ctx.db.workspaceUser.findUnique({
      where: {
        userId_workspaceId: { userId, workspaceId: input.workspaceId },
      },
      select: { userId: true },
    });
    if (!membership) {
      throw new TRPCError({
        code: "FORBIDDEN",
        message: "You are not a member of this workspace",
      });
    }

    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    const where: any = {
      workspaceId: input.workspaceId,
      sourceType: input.sourceType,
    };
    if (input.sourceId) where.sourceId = input.sourceId;
    if (input.status) where.status = input.status;

    if (input.assigneeEmail) {
      const user = await findUserByEmailInWorkspace(
        input.assigneeEmail,
        input.workspaceId,
      );
      if (user) {
        where.assignees = { some: { userId: user.id } };
      } else {
        where.participantAssignees = {
          some: { participant: { email: input.assigneeEmail } },
        };
      }
    }

    return ctx.db.action.findMany({
      where,
      include: {
        project: true,
        transcriptionSession: { select: { id: true, title: true } },
        assignees: {
          include: {
            user: { select: { id: true, name: true, email: true, image: true } },
          },
        },
        participantAssignees: { include: { participant: true } },
      },
      orderBy: { createdAt: "desc" },
      take: input.limit,
    });
  }),
```
🛠️ Refactor suggestion | 🟠 Major | 🏗️ Heavy lift
Move this transcript-assignment flow into a service.
The router now owns assignee resolution, participant creation, provenance mapping, and source-query construction. Pulling that workflow into a service would make the access rules reusable, make the per-item transaction easier to implement cleanly, and keep the tRPC layer focused on input/output concerns.
As per coding guidelines, "All API logic must use tRPC for end-to-end type safety. Implement business logic in the services layer, not in routers."
Make each transcript item atomic.
If anything fails after action.create() succeeds—participant creation, assignee linking, or the hydration read—the item is pushed into skipped even though the action row has already been persisted. That leaves partially applied data behind and makes the created/skipped result inaccurate. Please wrap the per-item write path in an item-scoped transaction.
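The rollback semantics this comment asks for can be illustrated without a database. In the real router this would be Prisma's interactive transaction (`ctx.db.$transaction(async (tx) => { … })`); everything below is an in-memory mock with hypothetical names, shown only to demonstrate that a mid-item failure leaves no orphan action row.

```typescript
// Mock store standing in for the action and assignee tables.
type Store = { actions: string[]; assignees: string[] };

// Commit-on-success copy: mimics how an item-scoped transaction rolls back
// every write for an item when any step after action.create fails.
function withItemTransaction<T>(store: Store, fn: (tx: Store) => T): T {
  const tx: Store = { actions: [...store.actions], assignees: [...store.assignees] };
  const result = fn(tx); // a throw here leaves `store` untouched (rollback)
  store.actions = tx.actions;
  store.assignees = tx.assignees;
  return result;
}

function createItem(store: Store, item: { name: string; assigneeEmail?: string }): string {
  return withItemTransaction(store, (tx) => {
    tx.actions.push(item.name); // stand-in for ctx.db.action.create
    if (item.assigneeEmail === "broken@example.com") {
      throw new Error("assignee link failed"); // simulated mid-item failure
    }
    if (item.assigneeEmail) tx.assignees.push(item.assigneeEmail); // assignee link
    return item.name;
  });
}
```

A per-item try/catch around `createItem` preserves the batch's partial-success behaviour while making each item all-or-nothing.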
Search both assignee paths for the same email.
Right now this picks ActionAssignee when the email belongs to a workspace user and skips the participantAssignees path entirely. That means participant-backed actions disappear whenever the same email also exists on a workspace user, which is the overlap this API needs to handle.
Suggested fix:

```diff
 if (input.assigneeEmail) {
   const user = await findUserByEmailInWorkspace(
     input.assigneeEmail,
     input.workspaceId,
   );
-  if (user) {
-    where.assignees = { some: { userId: user.id } };
-  } else {
-    where.participantAssignees = {
-      some: { participant: { email: input.assigneeEmail } },
-    };
-  }
+  where.OR = [
+    ...(user ? [{ assignees: { some: { userId: user.id } } }] : []),
+    {
+      participantAssignees: {
+        some: { participant: { email: input.assigneeEmail } },
+      },
+    },
+  ];
 }
```
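The suggested fix above reduces to pure query construction, so it can be sketched and checked without a database. Here `user` stands for the result of `findUserByEmailInWorkspace` (possibly null); the helper name is illustrative, not the router code.

```typescript
// Builds the assignee portion of the Prisma `where` object so that both the
// ActionAssignee and participantAssignees paths are searched for one email.
function buildAssigneeFilter(
  assigneeEmail: string,
  user: { id: string } | null,
): { OR: Record<string, unknown>[] } {
  return {
    OR: [
      ...(user ? [{ assignees: { some: { userId: user.id } } }] : []),
      {
        participantAssignees: {
          some: { participant: { email: assigneeEmail } },
        },
      },
    ],
  };
}
```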
```typescript
export async function findUserByEmailInWorkspace(
  email: string,
  workspaceId: string,
): Promise<{ id: string; email: string; name: string | null } | null> {
  const user = await db.user.findUnique({
    where: { email },
    select: { id: true, email: true, name: true },
  });

  if (!user?.email) {
    return null;
  }

  const membership = await db.workspaceUser.findUnique({
    where: {
      userId_workspaceId: { userId: user.id, workspaceId },
    },
    select: { userId: true },
  });

  if (!membership) {
    return null;
  }

  return { id: user.id, email: user.email, name: user.name };
}
```
Honor team-based workspace access in this email lookup.
This helper only checks WorkspaceUser, but getWorkspaceMembership() in the same file treats team membership as a valid workspace access path too. A user who only reaches the workspace through a team will resolve as null here, so transcript assignee resolution and findBySource({ assigneeEmail }) can miss legitimate users.
Suggested fix:

```diff
 export async function findUserByEmailInWorkspace(
   email: string,
   workspaceId: string,
 ): Promise<{ id: string; email: string; name: string | null } | null> {
   const user = await db.user.findUnique({
     where: { email },
     select: { id: true, email: true, name: true },
   });
   if (!user?.email) {
     return null;
   }
-  const membership = await db.workspaceUser.findUnique({
-    where: {
-      userId_workspaceId: { userId: user.id, workspaceId },
-    },
-    select: { userId: true },
-  });
+  const membership = await getWorkspaceMembership(db, user.id, workspaceId);
   if (!membership) {
     return null;
   }
   return { id: user.id, email: user.email, name: user.name };
 }
```
Actionable comments posted: 3
🧹 Nitpick comments (1)
src/server/api/routers/__tests__/action.integration.test.ts (1)

543-901: ⚡ Quick win: add one regression test for `action.update` attribution.

This PR also extends `action.update` with `lastUpdatedBy`/`lastUpdatedSource`, but the new coverage only exercises the two new agent procedures. A small integration test here would keep that new attribution path from landing unverified.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/server/api/routers/__tests__/action.integration.test.ts` around lines 543-901, add an integration test that verifies action.update sets the new attribution fields: create a user and workspace, create an action via db.action.create, call caller.action.update (using createTestCaller) to change the action, then load the action (via db.action.findUnique or the update result) and assert lastUpdatedBy and lastUpdatedSource match the expected values (e.g., "AGENT" and the agent source constant); reference functions/classes: action.update procedure, createTestCaller, db.action.create, db.action.findUnique to locate where to add the test.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e3a84fcb-162b-49c3-ae21-af2fe04075e1
📒 Files selected for processing (2)
src/server/api/routers/__tests__/action.integration.test.ts
src/test/test-db.ts
it("rejects unauthorized workspace", async () => {
  const { ws, session } = await setupTranscript();
  const stranger = await createUser(db);

  const caller = createTestCaller(stranger.id);
  await expect(
    caller.action.bulkCreateFromTranscript({
      transcriptionSessionId: session.id,
      workspaceId: ws.id,
      items: [{ description: "Nope", priority: "MEDIUM" }],
    }),
  ).rejects.toThrow(TRPCError);
});

it("rejects mismatched transcript workspace", async () => {
  const owner = await createUser(db);
  const ws1 = await createWorkspace(db, { ownerId: owner.id });
  const ws2 = await createWorkspace(db, { ownerId: owner.id });
  const session = await db.transcriptionSession.create({
    data: {
      sessionId: `sess-${Math.random().toString(36).slice(2, 10)}`,
      workspaceId: ws2.id,
      userId: owner.id,
      title: "Other workspace",
    },
  });

  const caller = createTestCaller(owner.id);
  await expect(
    caller.action.bulkCreateFromTranscript({
      transcriptionSessionId: session.id,
      workspaceId: ws1.id,
      items: [{ description: "Nope", priority: "MEDIUM" }],
    }),
  ).rejects.toThrow(TRPCError);
});
Assert the specific tRPC error codes here.
.rejects.toThrow(TRPCError) will pass for any server-side tRPC failure, including unrelated validation or NOT_FOUND paths. These tests are meant to pin access-control behavior, so they should verify the expected code too.
Also applies to: 888-899
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/server/api/routers/__tests__/action.integration.test.ts` around lines 671
- 706, Update the two tests that currently use .rejects.toThrow(TRPCError) (the
"rejects unauthorized workspace" and "rejects mismatched transcript workspace"
cases and the similar assertions around lines 888-899) to assert the specific
tRPC error code instead of any TRPCError; after calling
createTestCaller(...).action.bulkCreateFromTranscript(...), use
.rejects.toMatchObject({ code: "FORBIDDEN" }) (or the appropriate code your
server sends) so the test verifies access-control failure precisely rather than
any server error; update both test cases to expect the exact error code.
// Allow:
// - localhost / 127.0.0.1 / ::1 hosts
// - testcontainers (host.docker.internal, dynamic ports)
// - URLs whose database name contains "test" (e.g. exponential_test)
// - explicit opt-in via ALLOW_NON_LOCAL_TEST_DB=1 (escape hatch for
//   deliberate scenarios; never set this in normal dev)
Require explicit opt-in for any non-loopback database.
Allowing a remote URL solely because the database name contains test still lets truncateAllTables() wipe shared QA/staging data. This guard only becomes reliable if non-local hosts always require ALLOW_NON_LOCAL_TEST_DB=1.
Safer guard
- const dbNameLooksLikeTest = /test/i.test(dbName);
  const explicitOptIn = process.env.ALLOW_NON_LOCAL_TEST_DB === "1";
- if (!isLocalhost && !dbNameLooksLikeTest && !explicitOptIn) {
+ if (!isLocalhost && !explicitOptIn) {
Also applies to: 40-52
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/test/test-db.ts` around lines 26 - 31, The current guard allows
non-loopback DB URLs if the database name contains "test"; instead require
explicit opt-in via ALLOW_NON_LOCAL_TEST_DB=1 for any non-loopback host. Update
the host-check logic in src/test/test-db.ts (the function/area that decides
allowed DBs — look for the code handling "localhost / 127.0.0.1 / ::1",
"host.docker.internal", and the database-name "test" check and the call site for
truncateAllTables()) so that: 1) loopback hosts (localhost/127.0.0.1/::1) remain
allowed; 2) any non-loopback host (including host.docker.internal or hosts with
DB names containing "test") must only be allowed when
process.env.ALLOW_NON_LOCAL_TEST_DB === "1"; and 3) remove or disable the
database-name contains "test" bypass; adjust error/log message to clearly state
the required ALLOW_NON_LOCAL_TEST_DB env var when rejecting a non-loopback DB.
 * - CON: `session_replication_role = replica` requires SUPERUSER or
 *   REPLICATION privileges. If that fails, fall back to a per-statement loop
 *   that retries until no FK errors remain.
🧩 Analysis chain
🌐 Web query:
In PostgreSQL 16, does SET LOCAL session_replication_role = 'replica' require superuser privileges, and what error should non-superusers expect?
💡 Result:
In PostgreSQL 16, SET LOCAL session_replication_role = 'replica' requires either superuser privileges or the appropriate SET privilege on the session_replication_role parameter, which can be granted via GRANT SET ON PARAMETER session_replication_role TO user;. Non-superusers without this privilege will receive the error: "permission denied to set parameter "session_replication_role"".
Citations:
- 1: http://www.postgresql.org/docs/16/runtime-config-client.html
- 2: https://postgresqlco.nf/doc/en/param/session_replication_role/
- 3: https://stackoverflow.com/questions/73195844/grant-privileges-to-set-session-replication-role
- 4: DriverException: set session_replication_role = 'replica'; - permission denied to set parameter "session_replication_role" mikro-orm/mikro-orm#1612
- 5: [v1.0.0b] - Postgresql backup restore fails when not user postgres superuser mealie-recipes/mealie#1500
🏁 Script executed:
fd -t f "test-db.ts" | head -5
Repository: positonic/exponential
Length of output: 85
🏁 Script executed:
cat -n src/test/test-db.ts | sed -n '100,170p'
Length of output: 3476
🏁 Script executed:
# Search for error handling around session_replication_role in the file
rg -n "session_replication_role|catch.*error" src/test/test-db.ts -A 2 -B 2
Length of output: 1193
🏁 Script executed:
# Check if deleteAllTablesWithFkAwareRetries function exists
rg -n "deleteAllTablesWithFkAwareRetries" src/test/test-db.ts
Length of output: 47
🏁 Script executed:
# Search entire codebase for deleteAllTablesWithFkAwareRetries
rg "deleteAllTablesWithFkAwareRetries" --type ts --type tsx
Length of output: 93
🏁 Script executed:
# Search for any error handling around session_replication_role in codebase
rg "session_replication_role" -A 5 -B 5
Length of output: 2891
🏁 Script executed:
# Check if there's any catch block related to FK or replication role elsewhere
rg "catch.*error|catch\s*\(" src/test/ --type ts --type tsx -B 2 -A 3
Length of output: 93
🏁 Script executed:
# Search for catch blocks in test-db.ts
rg "catch" src/test/test-db.ts -B 2 -A 3
Length of output: 47
🏁 Script executed:
# Get file type information
fd -t f "test-db.ts" -x file {}
Length of output: 101
🏁 Script executed:
# Check entire test directory for error handling patterns
rg "catch.*error" src/test/ -B 2 -A 3
Length of output: 47
🏁 Script executed:
# Read the full function to understand the complete picture
cat -n src/test/test-db.ts | sed -n '122,157p'
Length of output: 1732
🏁 Script executed:
# Check if there are any similar fallback/retry functions in the codebase
rg "retry|fallback|FK.*error" src/ -i
Length of output: 19510
Implement the documented fallback when session_replication_role cannot be set.
The docstring (lines 118-120) explicitly documents a fallback: "If that fails, fall back to a per-statement loop that retries until no FK errors remain." However, the implementation has no error handling for this scenario. On managed Postgres where non-superusers cannot execute SET LOCAL session_replication_role = 'replica', the entire afterEach() cleanup fails instead of falling back. Wrap the transaction in try-catch and handle the privilege error by falling back to individual deletes with retry logic:
Fallback sketch
- await db.$transaction(statements, {
-   // Generous timeout for remote test DBs where ~150 round-trips can be slow.
-   timeout: 60_000,
-   maxWait: 10_000,
- });
+ try {
+   await db.$transaction(statements, {
+     // Generous timeout for remote test DBs where ~150 round-trips can be slow.
+     timeout: 60_000,
+     maxWait: 10_000,
+   });
+ } catch (error) {
+   if (!isSessionReplicationRolePrivilegeError(error)) throw error;
+   await deleteAllTablesWithFkAwareRetries(db, quotedTables);
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/test/test-db.ts` around lines 118 - 120, The afterEach() cleanup must
implement the documented fallback when setting session_replication_role fails:
wrap the current transaction block that runs "SET LOCAL session_replication_role
= 'replica'" and the bulk truncate/delete in a try-catch inside afterEach(); on
catching the permission/privilege error from client.query (or the transaction
helper used), fall back to iterating the table list (the same tables array used
in the cleanup) and perform per-table DELETEs within individual statements,
retrying each DELETE when it fails with a foreign-key/constraint error until it
succeeds (or until a small retry limit is hit) so that dependent rows are
removed in successive passes; ensure you still run the same
BEGIN/COMMIT/ROLLBACK around the primary approach and only use the per-statement
retry loop when the SET LOCAL session_replication_role = 'replica' privilege is
denied.
Description
Summary
Two new tRPC procedures on actionRouter for the upcoming one2b agent integration. Foundational work — both procedures are dormant until the companion mastra PR's gated agent calls them.
What this adds
actionRouter.bulkCreateFromTranscript (mutation)
Bulk-creates Action rows from a list of items extracted by the agent from a meeting transcript.
- Verifies the transcription session belongs to the same workspace
- The caller doesn't have to know whether the assignee has an exponential account:
  1) email matches a workspace User -> ActionAssignee
  2) email matches a TranscriptionSessionParticipant for this meeting -> ActionParticipantAssignee
  3) unknown email -> auto-create a Participant + ActionParticipantAssignee
- Stamps sourceType='meeting', sourceId=transcriptionSessionId, source='agent-transcript', lastUpdatedBy='AGENT' for downstream audit / discovery
- Maps agent priority (HIGH/MEDIUM/LOW) to exponential's priority vocabulary ("1st Priority"/"Quick"/"5th Priority")
- Per-item try/catch so a single bad item doesn't abort the batch; failures returned in skipped with a reason
actionRouter.findBySource (query)
Workspace-scoped lookup by sourceType + optional sourceId. Optional assigneeEmail filter resolves through both the User-backed ActionAssignee path and the Participant-backed ActionParticipantAssignee path so callers don't need to know which assignment kind was used.
Helper — findUserByEmailInWorkspace
Added to src/server/services/access/resolvers/workspaceResolver.ts.
Looks up a User by email and verifies workspace membership in one call.
Reusable for any future agent integration that resolves humans by email.
Updated — actionRouter.update
Added optional lastUpdatedBy (enum: AGENT/USER_EMAIL/USER_WHATSAPP/USER_UI) and lastUpdatedSource (string) inputs so agents can stamp source attribution on updates the same way bulkCreateFromTranscript does on creates.
Test infra
- 6 integration tests for bulkCreateFromTranscript covering each assignee path, batch resilience, and authorization
- 5 integration tests for findBySource covering filtering and auth
- truncateAllTables now clears TranscriptionSession, TranscriptionSessionParticipant, ActionParticipantAssignee
Internal — query include shape
bulkCreateFromTranscript and findBySource both include: { participant: true } on participantAssignees so callers get participant name/email rendered without a follow-up query.
Why this is safe to merge
These procedures have no callers in the existing codebase. The mastra side that consumes them is gated behind MASTRA_ONE2B_AGENTS_ENABLED (off by default), so even after both PRs merge, nothing exercises this path until cutover. See mastra/docs/one2b-cutover.md for the full sequence.
Test plan
- npm run check
- npm run test:integration
- Ran bulkCreateFromTranscript against a real workspace + transcript; confirmed Action rows + ActionAssignee/ActionParticipantAssignee rows created as expected
Companion PR
Mastra: https://github.com/positonic/mastra/pull/new/feat/one2b-action-items-agent
Type of Change
Database Changes
If migrations are included:
Testing
Checklist
Deployment Notes
Any special deployment considerations?
Related Issues
Closes #
Summary by CodeRabbit
New Features
Tests
Chores