[client] Release wasm js.FuncOf callbacks in ssh and rdp paths #5982
Conversation
📝 Walkthrough

Refactors Go/WASM JS callback lifecycle: cert validation uses buffered result/error channels, named promise handlers with sync.Once-coordinated Release, and a timeout increased to 5 minutes; RDP proxy creates atomic […]

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Browser
    participant GoWASM
    participant WebSocketServer
    Browser->>GoWASM: Create proxy -> registers global handler handleRDCleanPathWebSocket_<id>
    GoWASM->>GoWASM: generate atomic proxyID, store handler in pendingHandlers[proxyID]
    Browser->>WebSocketServer: Open WebSocket to proxy endpoint
    WebSocketServer->>Browser: WebSocket connected (onopen)
    Browser->>GoWASM: Invoke global handler -> transfer pending handler to proxyConnection
    GoWASM->>Browser: Attach onMessage/onClose callbacks (store js.Funcs on connection)
    Browser->>GoWASM: Send messages / Close
    GoWASM->>GoWASM: Handle messages/close via stored callbacks
    GoWASM->>GoWASM: cleanupOnce -> detach callbacks, delete global handler, Release() js.Funcs, remove pendingHandlers/destinations/activeConnections
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ Passed checks (5 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
client/wasm/internal/rdp/rdcleanpath.go (1)
283-311: ⚠️ Potential issue | 🔴 Critical

Guard `cleanupConnection` with `sync.Once` to prevent double-release panic, and conditionally release zero-value `js.Func` fields. Two real edge cases exist:

Double-cleanup panic: `onCloseFn` (line 184) calls `conn.cancel()` at any time, which can fire `ctx.Done()` in `setupTLSConnection` (line 153) while `handleDirectRDP`'s `defer cleanupConnection` is pending. The second call to `conn.wsHandlerFn.Release()` panics with `syscall/js: Release of already released Func`. Add a `sync.Once` field to the connection struct to ensure cleanup runs only once.

Zero-value Release crash: If `HandleWebSocketConnection` returns at line 141 (no destination found) or other error paths, `setupWebSocketHandlers` is never called, leaving `wsHandlerFn`, `onMessageFn`, and `onCloseFn` as zero-value `js.Func` fields. Calling `.Release()` on a zero `js.Func` is undefined behavior and crashes. Check each field before releasing, or initialize them with no-op functions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/wasm/internal/rdp/rdcleanpath.go` around lines 283 - 311, Add idempotent cleanup to avoid double-release panics by adding a sync.Once field (e.g., cleanupOnce) to proxyConnection and wrap the body of RDCleanPathProxy.cleanupConnection in conn.cleanupOnce.Do(func(){ ... }). Also guard releasing JS callbacks by checking for the zero-value js.Func before calling Release (e.g., if conn.wsHandlerFn != (js.Func{}) { conn.wsHandlerFn.Release() } and similarly for onMessageFn and onCloseFn). Ensure the new once field is used everywhere that may call cleanup (handleDirectRDP defer, onCloseFn handler, setupTLSConnection cancel paths) so cleanupConnection runs only once and only releases initialized js.Funcs; this prevents syscall/js: Release of already released Func and crashes from releasing zero-value js.Funcs.
🧹 Nitpick comments (2)
client/wasm/internal/rdp/rdcleanpath.go (1)
96-105: Pre‑existing: `proxyID` derived from `len(activeConnections)` can collide under concurrent `CreateProxy` calls.

`len(p.activeConnections)` is read without the mutex held, and more importantly the counter is reused once a connection is cleaned up (it decrements). Two concurrent/subsequent `CreateProxy` calls can produce the same `proxyID`, which now also collides in `pendingHandlers` and `destinations`, and will `js.Global().Set` the same global name — overwriting the prior `handlerFn` and leaking it (no `Release`).

Since the PR's new maps key off `proxyID`, the collision blast radius is slightly larger than before. Not strictly in scope, but worth considering a UUID or monotonic counter:

```diff
-proxyID := fmt.Sprintf("proxy_%d", len(p.activeConnections))
+proxyID := fmt.Sprintf("proxy_%s", uuid.NewString())
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/wasm/internal/rdp/rdcleanpath.go` around lines 96 - 105, proxyID generation using len(p.activeConnections) in CreateProxy is racy and can collide; compute a unique id under p.mu and use a monotonic counter (e.g., p.nextProxyID) or a UUID instead of len(...) so concurrent CreateProxy calls cannot produce the same proxyID. Acquire p.mu before generating proxyID, increment the counter (or generate UUID) to derive proxyID, then set p.destinations[proxyID], p.pendingHandlers, and call js.Global().Set for handlerFn while still holding consistent ownership; ensure any existing code that expects numeric IDs uses the new monotonic counter field (e.g., p.nextProxyID) or string UUID and initialize that field where p is constructed.

client/wasm/internal/rdp/cert_validation.go (1)
52-65: The `defer Release()` pattern here is safe from a panic perspective, but the approach has a subtle inefficiency.

When the 60s timeout fires at line 77, `validateCertificateWithJS` returns and releases the handlers. If the user accepts/rejects after the timeout (61s+), JavaScript invokes the released callbacks, which log a safe error to the browser console rather than panic. However, the resultChan/errorChan writes are then lost because the timeout goroutine has already moved on.

While the buffered channel prevents blocking, the current pattern discards late callbacks silently. Consider releasing from within the callbacks instead—at the cost of a trivial leak if the promise never settles—to make the timeout semantics explicit and avoid silent discards:

Alternative approach: release from within handlers

```diff
 resultChan := make(chan bool, 1)
 errorChan := make(chan error, 1)
-thenFn := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
-	result := args[0].Bool()
-	resultChan <- result
-	return nil
-})
-defer thenFn.Release()
-
-catchFn := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
-	errorChan <- fmt.Errorf("certificate validation failed")
-	return nil
-})
-defer catchFn.Release()
-
+var thenFn, catchFn js.Func
+var releaseOnce sync.Once
+release := func() {
+	releaseOnce.Do(func() {
+		thenFn.Release()
+		catchFn.Release()
+	})
+}
+thenFn = js.FuncOf(func(this js.Value, args []js.Value) interface{} {
+	defer release()
+	resultChan <- args[0].Bool()
+	return nil
+})
+catchFn = js.FuncOf(func(this js.Value, args []js.Value) interface{} {
+	defer release()
+	errorChan <- fmt.Errorf("certificate validation failed")
+	return nil
+})
 promise.Call("then", thenFn).Call("catch", catchFn)
```

Requires adding `"sync"` to imports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/wasm/internal/rdp/cert_validation.go` around lines 52 - 65, The current pattern defers thenFn.Release() and catchFn.Release(), which can lead to silently discarded late callbacks after the timeout; change to releasing the JS handlers from inside the callbacks using a sync.Once to ensure they are only released once: remove the defer Release() calls, add "sync" to imports and a var once sync.Once, and in both thenFn and catchFn call once.Do(func(){ thenFn.Release(); catchFn.Release() }) before sending to resultChan/errorChan so late JS invocations explicitly release handlers and avoid silent discards.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@client/wasm/internal/rdp/rdcleanpath.go`:
- Around line 110-125: The handlerFn stored in p.pendingHandlers and registered
on js.Global is leaked if CreateProxy runs but HandleWebSocketConnection never
executes; add a revocation path (e.g., implement func (p *RDCleanPathProxy)
revokeProxy(proxyID string)) that locks p.mu, removes p.pendingHandlers[proxyID]
and p.destinations[proxyID], then (if the js.Func existed) calls
js.Global().Delete(fmt.Sprintf("handleRDCleanPathWebSocket_%s", proxyID)) and
fn.Release(); wire this revokeProxy so callers can drop orphaned proxies (either
expose it to JS or call it from any timeout/parent-context cancellation path)
and ensure existing cleanupConnection/HandleWebSocketConnection still call it
where appropriate.
- Around line 300-310: In cleanupConnection, detach the WebSocket handlers
before releasing the js.Funcs: find where conn.ws (the ws js.Value set up with
onGoMessage and onGoClose) is referenced and either removeEventListener for the
corresponding functions or set ws.Set("onmessage", js.Undefined()) and
ws.Set("onclose", js.Undefined()) to prevent JavaScript from calling released
js.Funcs; then release conn.onMessageFn and conn.onCloseFn and proceed with
deleting conn.wsHandlerFn,
js.Global().Delete(fmt.Sprintf("handleRDCleanPathWebSocket_%s", conn.id)), and
removing entries from p.activeConnections, p.destinations, and
p.pendingHandlers.
In `@client/wasm/internal/ssh/handlers.go`:
- Around line 55-60: The exported jsInterface still holds references to released
js.Funcs causing a panic when JS calls write/resize/close after readLoop
returns; before calling writeFunc.Release(), resizeFunc.Release(), and
closeFunc.Release() in the goroutine that wraps readLoop, first clear the
properties on jsInterface (e.g., set jsInterface.Get("write"), Get("resize"),
Get("close") to js.Undefined() or delete them) so JS callers get a normal
undefined/function error instead of triggering "syscall/js: use of released
function"; update the goroutine around readLoop (and any onclose handler
cleanup) to clear jsInterface.write/resize/close prior to calling Release on
writeFunc, resizeFunc, and closeFunc.
---
Outside diff comments:
In `@client/wasm/internal/rdp/rdcleanpath.go`:
- Around line 283-311: Add idempotent cleanup to avoid double-release panics by
adding a sync.Once field (e.g., cleanupOnce) to proxyConnection and wrap the
body of RDCleanPathProxy.cleanupConnection in conn.cleanupOnce.Do(func(){ ...
}). Also guard releasing JS callbacks by checking for the zero-value js.Func
before calling Release (e.g., if conn.wsHandlerFn != (js.Func{}) {
conn.wsHandlerFn.Release() } and similarly for onMessageFn and onCloseFn).
Ensure the new once field is used everywhere that may call cleanup
(handleDirectRDP defer, onCloseFn handler, setupTLSConnection cancel paths) so
cleanupConnection runs only once and only releases initialized js.Funcs; this
prevents syscall/js: Release of already released Func and crashes from releasing
zero-value js.Funcs.
---
Nitpick comments:
In `@client/wasm/internal/rdp/cert_validation.go`:
- Around line 52-65: The current pattern defers thenFn.Release() and
catchFn.Release(), which can lead to silently discarded late callbacks after the
timeout; change to releasing the JS handlers from inside the callbacks using a
sync.Once to ensure they are only released once: remove the defer Release()
calls, add "sync" to imports and a var once sync.Once, and in both thenFn and
catchFn call once.Do(func(){ thenFn.Release(); catchFn.Release() }) before
sending to resultChan/errorChan so late JS invocations explicitly release
handlers and avoid silent discards.
In `@client/wasm/internal/rdp/rdcleanpath.go`:
- Around line 96-105: proxyID generation using len(p.activeConnections) in
CreateProxy is racy and can collide; compute a unique id under p.mu and use a
monotonic counter (e.g., p.nextProxyID) or a UUID instead of len(...) so
concurrent CreateProxy calls cannot produce the same proxyID. Acquire p.mu
before generating proxyID, increment the counter (or generate UUID) to derive
proxyID, then set p.destinations[proxyID], p.pendingHandlers, and call
js.Global().Set for handlerFn while still holding consistent ownership; ensure
any existing code that expects numeric IDs uses the new monotonic counter field
(e.g., p.nextProxyID) or string UUID and initialize that field where p is
constructed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 632871ee-894b-437b-bbc0-1458d0e81c5a
📒 Files selected for processing (3)
- client/wasm/internal/rdp/cert_validation.go
- client/wasm/internal/rdp/rdcleanpath.go
- client/wasm/internal/ssh/handlers.go
```go
handlerFn := js.FuncOf(func(_ js.Value, args []js.Value) any {
	if len(args) < 1 {
		return js.ValueOf("error: requires WebSocket argument")
	}

	ws := args[0]
	p.HandleWebSocketConnection(ws, proxyID)
	return nil
})
p.mu.Lock()
if p.pendingHandlers == nil {
	p.pendingHandlers = make(map[string]js.Func)
}
p.pendingHandlers[proxyID] = handlerFn
p.mu.Unlock()
js.Global().Set(fmt.Sprintf("handleRDCleanPathWebSocket_%s", proxyID), handlerFn)
```
handlerFn in pendingHandlers leaks if the proxy is never connected to.
If CreateProxy is called but HandleWebSocketConnection never runs (user navigates away, connection fails JS-side before reaching the handler), handlerFn stays pinned in pendingHandlers and registered on js.Global() forever — cleanupConnection only runs when there's an active proxyConnection. This is a smaller leak than the pre‑PR state, but the fix is incomplete.
Consider exposing a "revoke proxy" entry point callable from JS (or tied to a parent context) that drops orphaned entries:
```go
func (p *RDCleanPathProxy) revokeProxy(proxyID string) {
	p.mu.Lock()
	fn, ok := p.pendingHandlers[proxyID]
	delete(p.pendingHandlers, proxyID)
	delete(p.destinations, proxyID)
	p.mu.Unlock()
	if ok {
		js.Global().Delete(fmt.Sprintf("handleRDCleanPathWebSocket_%s", proxyID))
		fn.Release()
	}
}
```

Acceptable to defer to a follow‑up if orphaned proxies are rare in practice.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/wasm/internal/rdp/rdcleanpath.go` around lines 110 - 125, The
handlerFn stored in p.pendingHandlers and registered on js.Global is leaked if
CreateProxy runs but HandleWebSocketConnection never executes; add a revocation
path (e.g., implement func (p *RDCleanPathProxy) revokeProxy(proxyID string))
that locks p.mu, removes p.pendingHandlers[proxyID] and p.destinations[proxyID],
then (if the js.Func existed) calls
js.Global().Delete(fmt.Sprintf("handleRDCleanPathWebSocket_%s", proxyID)) and
fn.Release(); wire this revokeProxy so callers can drop orphaned proxies (either
expose it to JS or call it from any timeout/parent-context cancellation path)
and ensure existing cleanupConnection/HandleWebSocketConnection still call it
where appropriate.
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check current state of rdcleanpath.go for revokeProxy or similar cleanup
rg -n "revokeProxy\|RevokeProxy\|revoke\|Revoke\|timeout\|TTL\|expire\|Expire\|CleanupOrphaned\|cleanupOrphaned" client/wasm/internal/rdp/rdcleanpath.go
echo "---"
# Check if revokeProxy is exposed to JS
rg -n "revokeProxy\|RevokeProxy" client/wasm/
echo "---"
# Show CreateProxy function to see if a timeout/parent-context path was added
grep -n "CreateProxy\|revokeProxy\|pendingHandlers\|time\.\|After\|Timer\|ctx" client/wasm/internal/rdp/rdcleanpath.go | head -60
```
Length of output: 1058
@lixmal, thanks for the update. Looking at the current code, the connected path is now clean — HandleWebSocketConnection transfers handlerFn into conn.wsHandlerFn and cleanupConnection releases everything properly.
The remaining gap (already documented in the code comment at lines 93–97) is the never-connected orphan path: if CreateProxy is called but HandleWebSocketConnection never runs (e.g., the user navigates away before IronRDP opens the WebSocket), handlerFn and its js.Global() registration stay pinned for the page lifetime. A revokeProxy(proxyID) callable from JS — or a short TTL goroutine in CreateProxy — would close this.
Would you like me to open a follow-up GitHub issue to track the orphaned-proxy leak?
🧠 Learnings used
Learnt from: lixmal
Repo: netbirdio/netbird PR: 5826
File: client/internal/portforward/manager.go:35-38
Timestamp: 2026-04-08T14:55:48.251Z
Learning: In `client/internal/portforward/manager.go` (netbirdio/netbird), persisting permanent-lease (ttl=0/UPnP error 725 fallback) port mapping state for crash-recovery cleanup is intentionally deferred to a follow-up PR. The blocker is that `State.Cleanup` requires NAT gateway re-discovery via `nat.DiscoverGateway`, which can block startup for ~10 seconds when no gateway is present, affecting all clients. The TODO comment at line ~35 documents this constraint. Do not flag the missing state persistence for permanent leases as a blocking issue in this PR.
Learnt from: fpenezic
Repo: netbirdio/netbird PR: 5866
File: client/internal/peer/worker_relay.go:71-80
Timestamp: 2026-04-22T22:06:04.227Z
Learning: In netbirdio/netbird (`client/internal/peer/worker_relay.go`), `WorkerRelay.OnNewOffer` fires on ALL offer/answer exchanges (normal re-keying included), NOT only after Guard detects a disconnect. Unconditionally calling `CloseConnByPeerKey` when `OpenConn` returns `ErrConnAlreadyExists` inside `OnNewOffer` causes healthy relay connections to be torn down, producing an infinite reconnect loop (observed at 3600+ iterations/hour in multi-peer scenarios). A proper staleness check (e.g., an explicit stale flag set by WGWatcher before queuing the offer, or a health probe on the existing conn) is required before closing the existing connection.
Learnt from: lixmal
Repo: netbirdio/netbird PR: 5031
File: client/ssh/proxy/proxy.go:537-559
Timestamp: 2026-01-05T06:33:50.474Z
Learning: The NetBird SSH proxy (client/ssh/proxy/proxy.go) is ephemeral and tied to a single SSH client session. It operates through stdio and exists only for the lifetime of one client connection, so session-scoped contexts are appropriate for the proxy's lifecycle.
Learnt from: lixmal
Repo: netbirdio/netbird PR: 5530
File: proxy/server.go:930-941
Timestamp: 2026-03-07T13:25:18.409Z
Learning: In proxy/server.go, `modifyMapping()` intentionally removes old routes before setting up new ones. Old routes hold the port listener, so attempting to bind the same port without first releasing it would result in an address-in-use error. The brief service interruption during modify is an accepted design trade-off, not a bug.
Learnt from: lixmal
Repo: netbirdio/netbird PR: 5530
File: management/internals/modules/reverseproxy/service/manager/manager.go:302-317
Timestamp: 2026-03-07T14:48:20.330Z
Learning: In `management/internals/modules/reverseproxy/service/manager/manager.go`, `checkPortConflict` uses `LockingStrengthUpdate` (SELECT FOR UPDATE) to serialize on existing port rows. For an unused port (no existing rows to lock), two concurrent transactions could both pass the DB check, but the proxy's OS-level port binding acts as the hard uniqueness guard — only one process can bind a given port. This dual-layer enforcement (DB best-effort + proxy binding) is the accepted design; no additional DB uniqueness constraint is needed.
Learnt from: lixmal
Repo: netbirdio/netbird PR: 5587
File: shared/management/http/api/openapi.yml:3151-3171
Timestamp: 2026-03-13T06:22:59.436Z
Learning: Repo netbirdio/netbird PR `#5587`: For reverse proxy header authentication, HeaderAuthConfig.value is required in requests and cleared in responses. On update, Manager.preserveExistingAuthSecrets() calls preserveHeaderAuthHashes() before validateHeaderAuthValues() (management/internals/modules/reverseproxy/service/manager/manager.go Lines ~555–597), so sending value: "" preserves the existing hashed secret; providing a non-empty value rotates it.
Learnt from: lixmal
Repo: netbirdio/netbird PR: 5891
File: util/capture/filter.go:164-183
Timestamp: 2026-04-15T10:26:03.137Z
Learning: In `util/capture/filter.go` (netbirdio/netbird, PR `#5891`), `parseIPv4Info` does not guard L4 port extraction on fragment offset == 0. Non-initial IPv4 fragments will have their payload bytes misread as src/dst ports, which can cause spurious matches for port-based filters. The fix (checking `binary.BigEndian.Uint16(data[6:8]) & 0x1fff == 0` before reading ports) was deferred to a future PR. Do not treat the missing fragment-offset guard as a blocking issue for PR `#5891`.
Learnt from: bcmmbaga
Repo: netbirdio/netbird PR: 5441
File: management/server/user.go:745-749
Timestamp: 2026-02-24T19:32:13.189Z
Learning: In Go codebases like netbirdio/netbird, methods returning (T, error) should follow the convention: if an error is returned, propagate it and return early. When the caller handles the error immediately, explicit nil checks on the returned value are unnecessary; rely on the error to guard control flow and only inspect the value when err is nil. This guidance applies broadly to Store methods such as GetUserByUserID in Go code; prefer early return on error and avoid redundant nil checks after a successful error path.
Learnt from: lixmal
Repo: netbirdio/netbird PR: 5530
File: proxy/internal/udp/relay.go:276-278
Timestamp: 2026-03-07T13:59:16.506Z
Learning: For Go 1.25+ projects, use sync.WaitGroup.Go() to run goroutines because it handles Add(1) and defer Done() automatically. Do not flag wg.Go(...) as a compilation error in codebases that require Go 1.25 or newer. If supporting older Go versions, gate such usage with build tags or avoid wg.Go() to maintain compatibility; verify the module targets Go 1.25+ before adopting this pattern.
Learnt from: lixmal
Repo: netbirdio/netbird PR: 5688
File: client/firewall/uspfilter/forwarder/icmp.go:268-287
Timestamp: 2026-03-25T09:21:53.814Z
Learning: When using gVisor’s `header.ICMPv6Checksum(ICMPv6ChecksumParams)` (`gvisor.dev/gvisor/pkg/tcpip/header`), the pseudo-header checksum is computed internally from `Src`, `Dst`, and `Header`. Use `PayloadCsum`/`PayloadLen` only for scattered-buffer cases where the ICMPv6 payload bytes are not included in `Header`. If `Header` already contains the complete ICMPv6 message (ICMPv6 header + all data), call `ICMPv6Checksum` with `PayloadCsum: 0` and `PayloadLen: 0` (or omit those fields). Do not pass a separately computed pseudo-header checksum (e.g., `header.PseudoHeaderChecksum(...)`) as `PayloadCsum` in that case, because it will be double-counted and produce an incorrect checksum.
Force-pushed 5ec3c56 to a3eec0e
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
client/wasm/internal/rdp/rdcleanpath.go (1)
98-106: ⚠️ Potential issue | 🟡 Minor

`proxyID` generation is racy and can collide — new pendingHandler leak vector.

`len(p.activeConnections)` is read at line 99 without holding `p.mu`, so it races with other `HandleWebSocketConnection`/`cleanupConnection` mutations of the map. More importantly, `activeConnections` is only populated by `HandleWebSocketConnection`, so two sequential `CreateProxy` calls issued before either WebSocket connects both observe `len == 0` and both generate `proxy_0`. On collision:

- `destinations["proxy_0"]` is overwritten → first proxy's destination is lost.
- `pendingHandlers["proxy_0"]` is overwritten → the first `handlerFn` is orphaned with no path to `Release()` (only `cleanupConnection` releases it, and it never runs for a proxy whose handler got shadowed).
- `js.Global().handleRDCleanPathWebSocket_proxy_0` is overwritten.

This defeats the leak fix the PR is introducing. Use a monotonic counter under the lock (or `atomic.Uint64`).

🛠️ Proposed fix

```diff
 type RDCleanPathProxy struct {
 	nbClient interface {
 		Dial(ctx context.Context, network, address string) (net.Conn, error)
 	}
 	activeConnections map[string]*proxyConnection
 	destinations      map[string]string
 	pendingHandlers   map[string]js.Func
+	nextProxyID       uint64
 	mu                sync.Mutex
 }
@@
 	return js.Global().Get("Promise").New(js.FuncOf(func(_ js.Value, args []js.Value) any {
 		resolve := args[0]
 		go func() {
-			proxyID := fmt.Sprintf("proxy_%d", len(p.activeConnections))
-			p.mu.Lock()
+			p.mu.Lock()
+			proxyID := fmt.Sprintf("proxy_%d", p.nextProxyID)
+			p.nextProxyID++
 			if p.destinations == nil {
 				p.destinations = make(map[string]string)
 			}
 			p.destinations[proxyID] = destination
 			p.mu.Unlock()
```
Verify each finding against the current code and only fix it if needed. In `@client/wasm/internal/rdp/rdcleanpath.go` around lines 98 - 106, The proxyID generation using fmt.Sprintf("proxy_%d", len(p.activeConnections)) is racy and can collide because len(p.activeConnections) is read without p.mu, causing overwritten destinations/pendingHandlers and leaked pending handler fns; fix by replacing this scheme with a monotonic counter accessed under p.mu (or an atomic.Uint64) to generate unique IDs, i.e., add/maintain a counter field on the struct or use atomic increment, acquire p.mu when assigning the new proxyID and updating p.destinations so the ID, destinations map update and pendingHandlers registration are atomic with respect to HandleWebSocketConnection and cleanupConnection to prevent collisions and leaks (refer to proxyID, p.activeConnections, p.mu, p.destinations, pendingHandlers, HandleWebSocketConnection, cleanupConnection, CreateProxy).
🧹 Nitpick comments (1)
client/wasm/internal/rdp/rdcleanpath.go (1)
85-89: Initialize `destinations` and `pendingHandlers` in the constructor for consistency.

Right now `activeConnections` is initialized up front while `destinations` and `pendingHandlers` are lazily initialized inside `CreateProxy` under the lock, forcing `if x == nil { x = make(...) }` boilerplate and leaving room for nil-map writes on future call sites that forget the guard. Preallocating here removes the check and makes the struct invariants clearer.

♻️ Proposed refactor

```diff
 return &RDCleanPathProxy{
 	nbClient:          client,
 	activeConnections: make(map[string]*proxyConnection),
+	destinations:      make(map[string]string),
+	pendingHandlers:   make(map[string]js.Func),
 }
```

And drop the lazy `if p.destinations == nil { ... }` / `if p.pendingHandlers == nil { ... }` blocks in `CreateProxy`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/wasm/internal/rdp/rdcleanpath.go` around lines 85 - 89, The RDCleanPathProxy constructor currently initializes activeConnections but leaves destinations and pendingHandlers nil, causing boilerplate nil-checks and risking nil-map writes; update the constructor that returns *RDCleanPathProxy to also initialize destinations and pendingHandlers with make(map[...]) so the struct invariants are set up up-front, then remove the now-unnecessary lazy nil guards inside CreateProxy (the if p.destinations == nil / if p.pendingHandlers == nil blocks) and rely on the preallocated maps instead.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d5f77030-29c5-42b5-97ae-5b91ecb72301
📒 Files selected for processing (3)
- client/wasm/internal/rdp/cert_validation.go
- client/wasm/internal/rdp/rdcleanpath.go
- client/wasm/internal/ssh/handlers.go
🚧 Files skipped from review as they are similar to previous changes (1)
- client/wasm/internal/ssh/handlers.go
Force-pushed a3eec0e to b6bd6b9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
client/wasm/internal/rdp/cert_validation.go (1)
49-79: ⚠️ Potential issue | 🟡 Minor

Timeout path releases `thenFn`/`catchFn` while the Promise chain is still pending — late JS invocation will log a console error and drop the callback.

The `<-time.After(...)` branch returns and both deferred `Release()`s fire while the Promise chain registered at line 65 is still pending. If the user accepts/rejects after the 60s timeout (or any async JS resolution), the JS runtime invokes released Go funcs. Rather than panicking, `syscall/js` detects the released function, logs "call to released function" to the JavaScript console, and returns without executing the Go handler. This silently drops the callback with a console error instead of a crash. The buffered channels prevent blocking, but they don't prevent the Promise from holding stale references.

Consider releasing only after the Promise settles. One approach: move `Release()` calls into the callbacks themselves using a `sync.Once` guard to ensure they execute exactly once, regardless of which callback fires or the timing of the timeout.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 882467fe-b13c-411c-8007-3f899827ed24
📒 Files selected for processing (3)
- client/wasm/internal/rdp/cert_validation.go
- client/wasm/internal/rdp/rdcleanpath.go
- client/wasm/internal/ssh/handlers.go
Force-pushed b6bd6b9 to b786534
🧹 Nitpick comments (1)
client/wasm/internal/rdp/rdcleanpath.go (1)
87-91: Initialize `destinations` and `pendingHandlers` in the constructor for consistency.

`activeConnections` is initialized here, but `destinations` and `pendingHandlers` rely on lazy init inside `CreateProxy` (lines 108-110, 127-129). The lazy init is functionally fine (writes are guarded, and `delete`/reads on a nil map are safe in the cleanup path), but initializing all three maps together reduces cognitive load and removes a foot-gun if a future caller touches these maps outside `CreateProxy`.

♻️ Proposed refactor

```diff
 return &RDCleanPathProxy{
 	nbClient:          client,
 	activeConnections: make(map[string]*proxyConnection),
+	destinations:      make(map[string]string),
+	pendingHandlers:   make(map[string]js.Func),
 }
```

And drop the nil-guards in `CreateProxy`:

```diff
 p.mu.Lock()
-if p.destinations == nil {
-	p.destinations = make(map[string]string)
-}
 p.destinations[proxyID] = destination
 p.mu.Unlock()
```

```diff
 p.mu.Lock()
-if p.pendingHandlers == nil {
-	p.pendingHandlers = make(map[string]js.Func)
-}
 p.pendingHandlers[proxyID] = handlerFn
 p.mu.Unlock()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/wasm/internal/rdp/rdcleanpath.go` around lines 87 - 91, RDCleanPathProxy constructor only initializes activeConnections but leaves destinations and pendingHandlers nil; update the RDCleanPathProxy initializer to also allocate destinations and pendingHandlers maps so all three (activeConnections, destinations, pendingHandlers) are created consistently, then remove the nil-guards in CreateProxy that check/allocate destinations and pendingHandlers (refer to RDCleanPathProxy, CreateProxy, destinations, pendingHandlers, activeConnections) to rely on the constructor-initialized maps.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5299d2ba-3817-4bde-8a76-0d8e27c2114b
📒 Files selected for processing (3)
- client/wasm/internal/rdp/cert_validation.go
- client/wasm/internal/rdp/rdcleanpath.go
- client/wasm/internal/ssh/handlers.go
🚧 Files skipped from review as they are similar to previous changes (1)
- client/wasm/internal/ssh/handlers.go
Describe your changes
`js.FuncOf` pins the Go closure in the syscall/js callback table until `Release()` is called. The existing WASM SSH / RDP code creates per-connection `js.Func` callbacks (`write`, `resize`, `close`, `onGoMessage`, `onGoClose`, the RDCleanPath per-proxy handshake handler, cert-validation `then`/`catch`) without ever releasing them, so each session permanently leaks the captured Go closure plus its JS wrapper. This PR releases them at the correct teardown point:

- SSH: create `write`/`resize`/`close` as named `js.Func` and release them when the read loop exits.
- RDP: release `onGoMessage`/`onGoClose` in `cleanupConnection`. Also delete the `handleRDCleanPathWebSocket_<id>` global and clean up `destinations`/`pendingHandlers` entries.
- Cert validation: release the `then`/`catch` callbacks when the promise resolves. Channels made buffered so the `defer Release` is still safe on the timeout path.

Issue ticket number and link
Stack
Checklist
Documentation
Select exactly one:
Internal WASM memory-management fix, no public-facing docs page.
Docs PR URL (required if "docs added" is checked)
Paste the PR link from https://github.com/netbirdio/docs here:
https://github.com/netbirdio/docs/pull/__