Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
📝 Walkthrough

Adds broad IPv6 overlay support and a DisableIPv6 toggle across client, device, engine, DNS, firewall, netflow, SSH, management, and protobufs; migrates many IP types to netip/wgaddr, introduces compact netip encode/decode utilities, and generalizes ACL/NAT/routing/firewall handling to dual-stack.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client (daemon/UI)
    participant Engine as Engine (client/internal)
    participant Device as Device/TUN
    participant Management as Management Server
    participant DNS as DNS subsystem
    Client->>Engine: Start with EngineConfig{WgAddr: wgaddr.Address, DisableIPv6: bool}
    Engine->>Device: Create/Update interface (provide wgaddr.Address with optional IPv6)
    Device-->>Engine: Respond assigned addresses (v4 and optional v6) or clear IPv6 on failure
    Engine->>Management: Login/register with SystemInfo (Flags.DisableIPv6, Capabilities)
    Management-->>Client: Send PeerConfig (address_v6, sourcePrefixes, capabilities)
    Client->>Engine: Apply peer AllowedIPs / sourcePrefixes filtered by local IPv6 availability
    Client->>DNS: Generate reverse zones and PTRs for v4 and v6 based on assigned addresses
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 1
🧹 Nitpick comments (1)
shared/management/proto/management.proto (1)
338-340: Proto field addition looks good.

The `address_v6` field appropriately extends `PeerConfig` for dual-stack peer support.

Minor documentation consideration: The comment says "16 bytes IP + 1 byte prefix length", which matches the compact format, but for a peer's overlay address the prefix length is typically implied (e.g., always `/128` for IPv6). Consider clarifying whether the prefix length byte is meaningful here or whether it could be simplified to just 16 bytes for the address, since `DecodeAddr` (from `netiputil`) discards the prefix length anyway.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shared/management/proto/management.proto` around lines 338 - 340, The comment for the new bytes field address_v6 on PeerConfig is ambiguous about the prefix byte; either explicitly document that the extra prefix-length byte is ignored by DecodeAddr (netiputil) or remove it and change the wire-format expectation to a plain 16-byte IPv6 address. Update the proto comment for address_v6 to state the intended format (e.g., "16 bytes IPv6 address, no prefix byte" or "16 bytes IP + 1 byte prefix (prefix is meaningful)"), and if you choose to remove the prefix byte, update any decoding/encoding logic that reads/writes address_v6 (search for DecodeAddr and usages in netiputil and consumers of PeerConfig) to handle the 16-byte-only form. Ensure the comment and code for address_v6, PeerConfig, and DecodeAddr/netiputil are consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@shared/netiputil/compact.go`:
- Around line 24-42: The Gosec G602 false positives in DecodePrefix arise from
slice access checks that are actually guarded by the switch on len(b); suppress
them by adding a nolint directive: annotate the relevant slice-access lines in
DecodePrefix (the cases handling length 5 and 17, i.e., the uses of b[:4], b[4],
b[:16], and b[16]) with "//nolint:gosec // G602 false positive: length is
checked by switch case" so the analyzer ignores these safe accesses while
preserving the existing guard logic.
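The guard-by-switch pattern that makes those slice accesses safe can be sketched as follows. This is an illustrative standalone version (note the lowercase name), not the project's `DecodePrefix`; it shows where the suggested `//nolint:gosec` annotations would sit.

```go
package main

import (
	"fmt"
	"net/netip"
)

// decodePrefix demonstrates the guard: the switch on len(b) proves the
// later indexing is in bounds, so a gosec G602 report on those lines is
// a false positive and can be suppressed with a nolint directive.
func decodePrefix(b []byte) (netip.Prefix, error) {
	switch len(b) {
	case 5:
		addr, _ := netip.AddrFromSlice(b[:4]) //nolint:gosec // G602 false positive: length is checked by switch case
		return netip.PrefixFrom(addr, int(b[4])), nil
	case 17:
		addr, _ := netip.AddrFromSlice(b[:16]) //nolint:gosec // G602 false positive: length is checked by switch case
		return netip.PrefixFrom(addr.Unmap(), int(b[16])), nil
	}
	return netip.Prefix{}, fmt.Errorf("invalid encoded prefix length %d", len(b))
}

func main() {
	p, _ := decodePrefix([]byte{100, 64, 0, 1, 16})
	fmt.Println(p) // 100.64.0.1/16
}
```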
---
Nitpick comments:
In `@shared/management/proto/management.proto`:
- Around line 338-340: The comment for the new bytes field address_v6 on
PeerConfig is ambiguous about the prefix byte; either explicitly document that
the extra prefix-length byte is ignored by DecodeAddr (netiputil) or remove it
and change the wire-format expectation to a plain 16-byte IPv6 address. Update
the proto comment for address_v6 to state the intended format (e.g., "16 bytes
IPv6 address, no prefix byte" or "16 bytes IP + 1 byte prefix (prefix is
meaningful)"), and if you choose to remove the prefix byte, update any
decoding/encoding logic that reads/writes address_v6 (search for DecodeAddr and
usages in netiputil and consumers of PeerConfig) to handle the 16-byte-only
form. Ensure the comment and code for address_v6, PeerConfig, and
DecodeAddr/netiputil are consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 172f0e99-f851-4579-a9e4-d40f6310a23d
⛔ Files ignored due to path filters (1)
`shared/management/proto/management.pb.go` is excluded by `!**/*.pb.go`
📒 Files selected for processing (7)
client/internal/acl/manager.go
client/internal/debug/debug.go
management/internals/shared/grpc/conversion.go
management/server/peer_test.go
shared/management/proto/management.proto
shared/netiputil/compact.go
shared/netiputil/compact_test.go
🧹 Nitpick comments (1)
shared/netiputil/compact.go (1)
23-42: Consider validating prefix length bounds for robustness.

The decoded prefix length (last byte) is not validated against valid bounds. For IPv4, valid bits are 0-32; for IPv6, 0-128. If the input contains out-of-range values, `netip.PrefixFrom` returns an invalid prefix silently, requiring callers to check `IsValid()`.

For a wire protocol decoder, explicit validation with a clear error message improves debuggability when malformed data is received.
🛡️ Proposed fix to validate prefix length
```diff
 func DecodePrefix(b []byte) (netip.Prefix, error) {
 	switch len(b) {
 	case 5:
 		var ip4 [4]byte
 		copy(ip4[:], b)
-		return netip.PrefixFrom(netip.AddrFrom4(ip4), int(b[len(b)-1])), nil
+		bits := int(b[len(b)-1])
+		if bits > 32 {
+			return netip.Prefix{}, fmt.Errorf("invalid IPv4 prefix length %d (max 32)", bits)
+		}
+		return netip.PrefixFrom(netip.AddrFrom4(ip4), bits), nil
 	case 17:
 		var ip6 [16]byte
 		copy(ip6[:], b)
 		addr := netip.AddrFrom16(ip6).Unmap()
 		bits := int(b[len(b)-1])
+		maxBits := 128
+		if addr.Is4() {
+			maxBits = 32
+		}
+		if bits > maxBits {
+			return netip.Prefix{}, fmt.Errorf("invalid prefix length %d (max %d)", bits, maxBits)
+		}
 		if addr.Is4() && bits > 32 {
 			bits = 32
 		}
 		return netip.PrefixFrom(addr, bits), nil
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shared/netiputil/compact.go` around lines 23 - 42, DecodePrefix does not validate the prefix length byte and can produce an invalid netip.Prefix; update DecodePrefix to read the trailing byte into bits and explicitly validate bounds: for the 5-byte (IPv4) case ensure 0 <= bits <= 32 (return a clear fmt.Errorf if not), and for the 17-byte (IPv6) case ensure 0 <= bits <= 128 (return error if out of range); keep the existing addr.Unmap() and the special case that an IPv6-mapped IPv4 address should not claim >32 bits (either validate before or after Unmap and return an error if inconsistent), and only call netip.PrefixFrom when bits are within the validated range so callers never receive silently invalid prefixes from DecodePrefix.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@shared/netiputil/compact.go`:
- Around line 23-42: DecodePrefix does not validate the prefix length byte and
can produce an invalid netip.Prefix; update DecodePrefix to read the trailing
byte into bits and explicitly validate bounds: for the 5-byte (IPv4) case ensure
0 <= bits <= 32 (return a clear fmt.Errorf if not), and for the 17-byte (IPv6)
case ensure 0 <= bits <= 128 (return error if out of range); keep the existing
addr.Unmap() and the special case that an IPv6-mapped IPv4 address should not
claim >32 bits (either validate before or after Unmap and return an error if
inconsistent), and only call netip.PrefixFrom when bits are within the validated
range so callers never receive silently invalid prefixes from DecodePrefix.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4c0facdd-fccf-46f0-b534-18bb5d96c612
📒 Files selected for processing (1)
shared/netiputil/compact.go
🧹 Nitpick comments (1)
shared/netiputil/compact.go (1)
16-21: Consider validating input prefix.

The implementation is correct for valid prefixes. However, if an invalid/zero `netip.Prefix` is passed, `p.Bits()` returns -1 (becomes 255 as a byte) and a zero `Addr` produces a 1-byte result. Both would fail on decode, so it's fail-safe, but explicit validation could provide clearer error messages at the source.

💡 Optional: Add validation for invalid prefixes

```diff
 // EncodePrefix encodes a netip.Prefix into compact bytes.
 // The address is always unmapped before encoding.
-func EncodePrefix(p netip.Prefix) []byte {
+func EncodePrefix(p netip.Prefix) ([]byte, error) {
+	if !p.IsValid() {
+		return nil, fmt.Errorf("invalid prefix")
+	}
 	addr := p.Addr().Unmap()
-	return append(addr.AsSlice(), byte(p.Bits()))
+	return append(addr.AsSlice(), byte(p.Bits())), nil
 }
```

Note: This would require updating callers to handle the error. Given the current usage context where callers likely pass valid prefixes, keeping the current signature may be acceptable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shared/netiputil/compact.go` around lines 16 - 21, Validate the input prefix in EncodePrefix by checking netip.Prefix validity and bit length before encoding: change EncodePrefix(p netip.Prefix) to return ([]byte, error), verify p.IsValid() and p.Bits() >= 0 (or other appropriate checks) and return a descriptive error when invalid instead of producing a malformed byte slice; update all callers to handle the error accordingly so decoding won't receive impossible values like 255 bits or a one-byte address.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@shared/netiputil/compact.go`:
- Around line 16-21: Validate the input prefix in EncodePrefix by checking
netip.Prefix validity and bit length before encoding: change EncodePrefix(p
netip.Prefix) to return ([]byte, error), verify p.IsValid() and p.Bits() >= 0
(or other appropriate checks) and return a descriptive error when invalid
instead of producing a malformed byte slice; update all callers to handle the
error accordingly so decoding won't receive impossible values like 255 bits or a
one-byte address.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 58f63604-5572-410f-90a1-859c6db3bd11
📒 Files selected for processing (2)
shared/netiputil/compact.go
shared/netiputil/compact_test.go
✅ Files skipped from review due to trivial changes (1)
- shared/netiputil/compact_test.go
PeerCapability is reported in PeerSystemMeta.capabilities on login/sync. Management uses it instead of version gating to determine client features. disableIPv6 in Flags lets users opt out of IPv6 overlay.
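A minimal sketch of the capability-based gating described above; the `peerMeta` struct, `capIPv6` constant, and `supportsIPv6` helper are all hypothetical names, since the real values live in the protobuf `PeerCapability` enum reported via `PeerSystemMeta.capabilities` and in `Flags.disableIPv6`.

```go
package main

import "fmt"

// capIPv6 is a hypothetical capability token; the real project uses a
// protobuf enum value.
const capIPv6 = "ipv6"

type peerMeta struct {
	capabilities []string
	disableIPv6  bool // mirrors the Flags.disableIPv6 opt-out
}

// supportsIPv6 gates a feature on a reported capability rather than on a
// client version comparison, with the user opt-out taking precedence.
func supportsIPv6(m peerMeta) bool {
	if m.disableIPv6 {
		return false
	}
	for _, c := range m.capabilities {
		if c == capIPv6 {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(supportsIPv6(peerMeta{capabilities: []string{capIPv6}}))                    // true
	fmt.Println(supportsIPv6(peerMeta{capabilities: []string{capIPv6}, disableIPv6: true})) // false
}
```

The advantage over version gating is that a backported or forked client reports exactly what it supports, so management never has to parse version strings.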
b7be56f to e2f7748
…anic, v6 validation
# Conflicts:
#	client/iface/wgaddr/address.go
The packet tracer resolved 'self' to the v4 overlay address unconditionally, causing "mixed address families" errors when tracing v6 traffic. Pick the self address matching the peer's address family. Add Engine.GetWgV6Addr() and rework parseAddress into resolveTraceAddresses which parses the non-self address first to determine the family, then resolves self accordingly.
# Conflicts:
#	management/server/types/networkmap_components.go
…outesToSync Extract deleteAccountUsers from DeleteAccount (complexity 21 -> ~14). Extract processResourcePolicies and getResourcePolicyPeers from getNetworkResourcesRoutesToSync (complexity 31 -> ~15). Fixes SonarCloud S3776 violations.
…sourcesRoutesToSync" This reverts commit 14a39f1.
# Conflicts:
#	management/server/types/networkmap_components.go
Resolve conflict in setupAndroidRoutes: merge IPv6 fake IP route with the explicit fake IP route storage from #5865. Notifier now stores a slice of fake IP routes (v4 + v6) via SetFakeIPRoutes to preserve the stale route re-injection fix.
# Conflicts:
#	client/firewall/iptables/manager_linux.go
#	client/firewall/nftables/manager_linux.go
#	client/firewall/nftables/router_linux.go
# Conflicts:
#	management/server/route_test.go
#	management/server/types/account.go
#	management/server/types/account_test.go
#	management/server/types/networkmap_comparison_test.go
#	management/server/types/networkmap_golden_test.go
#	management/server/types/networkmapbuilder.go
# Conflicts:
#	client/proto/daemon.pb.go
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
client/ui/network.go (1)
195-203: ⚠️ Potential issue | 🟠 Major

Use one canonical default-route check in both paths.

Line 198 uses substring matching, while Line 502 uses exact matching. This can make the Networks tab and tray menu disagree on what is an exit node.
💡 Proposed fix
```diff
 import (
 	"context"
 	"fmt"
+	"net/netip"
 	"runtime"
 	"sort"
 	"strings"
 	"time"
@@
 func getExitNodeNetworks(routes []*proto.Network) []*proto.Network {
 	var filteredRoutes []*proto.Network
 	for _, route := range routes {
-		if strings.Contains(route.Range, "0.0.0.0/0") || route.Range == "::/0" {
+		if isDefaultRoute(route.Range) {
 			filteredRoutes = append(filteredRoutes, route)
 		}
 	}
 	return filteredRoutes
 }
@@
 func (s *serviceClient) getExitNodes(conn proto.DaemonServiceClient) ([]*proto.Network, error) {
@@
 	var exitNodes []*proto.Network
 	for _, network := range resp.Routes {
-		if network.Range == "0.0.0.0/0" || network.Range == "::/0" {
+		if isDefaultRoute(network.Range) {
 			exitNodes = append(exitNodes, network)
 		}
 	}
 	return exitNodes, nil
 }
+
+func isDefaultRoute(raw string) bool {
+	prefix, err := netip.ParsePrefix(strings.TrimSpace(raw))
+	if err != nil {
+		return false
+	}
+	return prefix.Bits() == 0
+}
```

Also applies to: 491-506
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/ui/network.go` around lines 195 - 203, getExitNodeNetworks currently uses strings.Contains to detect default routes while another codepath uses exact equality, causing inconsistent behavior; add a single canonical helper function (e.g., isDefaultRoute(routeRange string) bool) that returns true only for the exact strings "0.0.0.0/0" or "::/0", replace the strings.Contains check in getExitNodeNetworks with a call to isDefaultRoute, and replace the other exact-match check (the code that currently compares to "0.0.0.0/0" or "::/0") to also call isDefaultRoute so both the Networks tab and tray menu use the same logic.

client/firewall/nftables/manager_linux.go (1)
178-186: ⚠️ Potential issue | 🟠 Major

Rollback the work table when `router.init` fails.

`createWorkTable()` has already replaced the old table before `router.init` runs. If `router.init` fails after creating any chains or rules, this branch exits without `rollbackInit()`, leaving partial nftables state behind.

Suggested fix

```diff
 if err := m.router.init(workTable); err != nil {
+	m.rollbackInit()
 	return fmt.Errorf("router init: %w", err)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/firewall/nftables/manager_linux.go` around lines 178 - 186, initFirewall replaces the work table via createWorkTable() but doesn't rollback when m.router.init(workTable) fails, leaving partial nftables state; update Manager.initFirewall to call m.rollbackInit() (or equivalent rollback method) if m.router.init(...) returns an error, and return a combined error that includes both the original router init error and any rollback error (so call rollback even on router init failure and surface rollback failures alongside the original error).

client/firewall/iptables/router_linux.go (1)
1040-1057: ⚠️ Potential issue | 🟡 Minor

Use `protoForFamily` in OUTPUT DNAT too.

This function now supports IPv6 addresses, but it still serializes the protocol with `string(protocol)`. That breaks family-specific names such as ICMP, where IPv6 needs `icmpv6` rather than `icmp`.

Suggested fix

```diff
 dnatRule := []string{
-	"-p", strings.ToLower(string(protocol)),
+	"-p", strings.ToLower(protoForFamily(protocol, r.v6)),
 	"--dport", strconv.Itoa(int(originalPort)),
 	"-d", localAddr.String(),
 	"-j", "DNAT",
 	"--to-destination", ":" + strconv.Itoa(int(translatedPort)),
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/firewall/iptables/router_linux.go` around lines 1040 - 1057, The OUTPUT DNAT rule in AddOutputDNAT currently lowercases string(protocol), which fails for family-specific protocol names (e.g., icmp vs icmpv6); update AddOutputDNAT to derive the IP family from localAddr and call protoForFamily(family, protocol) when building dnatRule instead of strings.ToLower(string(protocol)), ensuring the correct protocol token (like "icmpv6") is used; keep the rest of the rule construction and ruleID logic unchanged.
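For context, the family-specific mapping that `protoForFamily` performs can be sketched like this; the signature is an assumption inferred from the `protoForFamily(protocol, r.v6)` call in the suggested fix above, not the project's actual declaration.

```go
package main

import (
	"fmt"
	"strings"
)

// protoForFamily maps a protocol name to its family-specific iptables
// token: ICMP becomes "icmpv6" when building ip6tables rules, while
// family-neutral protocols like TCP/UDP pass through lowercased.
func protoForFamily(protocol string, v6 bool) string {
	p := strings.ToLower(protocol)
	if v6 && p == "icmp" {
		return "icmpv6"
	}
	return p
}

func main() {
	fmt.Println(protoForFamily("ICMP", true))  // icmpv6
	fmt.Println(protoForFamily("ICMP", false)) // icmp
	fmt.Println(protoForFamily("TCP", true))   // tcp
}
```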
♻️ Duplicate comments (1)
client/internal/peer/status_test.go (1)
89-89: ⚠️ Potential issue | 🟡 Minor

Don't ignore `AddPeer` error in test setup (Line 89).

This can hide setup failures and make the test pass for the wrong reason.
✅ Suggested fix
```diff
-	_ = status.AddPeer(key, "abc.netbird", ip, "")
+	require.NoError(t, status.AddPeer(key, "abc.netbird", ip, ""))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/internal/peer/status_test.go` at line 89, The test currently ignores the error returned by status.AddPeer(key, "abc.netbird", ip, ""), which can mask setup failures; change the call to check the returned error and fail the test on error (e.g., err := status.AddPeer(...); if err != nil { t.Fatalf("AddPeer failed: %v", err) }) so setup failures are surfaced; ensure you reference the existing status.AddPeer invocation and use the test's t to fail.
🧹 Nitpick comments (13)
management/server/http/handlers/dns/nameservers_handler_test.go (1)
237-269: Add bracketed and malformed-bracket IPv6 cases to this table test.

`TestToServerNSList_IPv6` currently checks only unbracketed inputs. Since parsing now normalizes brackets, add `[2001:4860:4860::8888]` (success) and unmatched bracket forms (error) to lock behavior.

🧪 Suggested test extension

```diff
 func TestToServerNSList_IPv6(t *testing.T) {
 	tests := []struct {
 		name     string
 		input    []api.Nameserver
 		expectIP netip.Addr
+		expectErr bool
 	}{
 		{
 			name: "IPv4",
 			input: []api.Nameserver{
 				{Ip: "1.1.1.1", NsType: "udp", Port: 53},
 			},
 			expectIP: netip.MustParseAddr("1.1.1.1"),
 		},
 		{
 			name: "IPv6",
 			input: []api.Nameserver{
 				{Ip: "2001:4860:4860::8888", NsType: "udp", Port: 53},
 			},
 			expectIP: netip.MustParseAddr("2001:4860:4860::8888"),
 		},
+		{
+			name: "IPv6 bracketed",
+			input: []api.Nameserver{
+				{Ip: "[2001:4860:4860::8888]", NsType: "udp", Port: 53},
+			},
+			expectIP: netip.MustParseAddr("2001:4860:4860::8888"),
+		},
+		{
+			name: "IPv6 malformed trailing bracket",
+			input: []api.Nameserver{
+				{Ip: "2001:4860:4860::8888]", NsType: "udp", Port: 53},
+			},
+			expectErr: true,
+		},
 	}
 	for _, tc := range tests {
 		t.Run(tc.name, func(t *testing.T) {
 			result, err := toServerNSList(tc.input)
-			assert.NoError(t, err)
+			if tc.expectErr {
+				assert.Error(t, err)
+				return
+			}
+			assert.NoError(t, err)
 			if assert.Len(t, result, 1) {
 				assert.Equal(t, tc.expectIP, result[0].IP)
 				assert.Equal(t, 53, result[0].Port)
 			}
 		})
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/http/handlers/dns/nameservers_handler_test.go` around lines 237 - 269, Extend the TestToServerNSList_IPv6 table to include a bracketed IPv6 success case and malformed-bracket failure cases: add an input with Ip set to "[2001:4860:4860::8888]" and expect the same netip.MustParseAddr("2001:4860:4860::8888") and Port 53 (call to toServerNSList should succeed), and add inputs with unmatched brackets such as "[2001:4860:4860::8888" and "2001:4860:4860::8888]" where calling toServerNSList should return an error (use assert.Error/NoError accordingly); update the subtest assertions to check for error vs success instead of always asserting no error, referencing the TestToServerNSList_IPv6 test and the toServerNSList function.

client/internal/lazyconn/activity/listener_bind.go (1)
68-99: Consider centralizing fake-IP byte-derivation logic.
`deriveFakeIP` and `client/iface/wgproxy/bind/proxy.go:fakeAddress` now implement parallel "last two bytes → 127.x.x.x" logic. A shared helper would reduce future behavior drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/internal/lazyconn/activity/listener_bind.go` around lines 68 - 99, deriveFakeIP currently duplicates the "last two bytes → 127.2.x.x" logic found in client/iface/wgproxy/bind/proxy.go's fakeAddress; centralize this by extracting a shared helper (e.g., BuildLoopbackFromLastTwoBytes or To127TwoByteLoopback) that takes a byte slice or netip.Addr and returns the netip.Addr loopback, then replace the logic in deriveFakeIP (the peerIP.As4()/raw[14..15] branches) and in fakeAddress to call that helper; ensure the helper handles IPv4/IPv6 inputs, validates lengths, and returns an error or zero addr on invalid input so existing callers' error handling (deriveFakeIP's final fmt.Errorf path) remains correct.

management/server/store/sql_store_get_account_test.go (1)
151-153: Add explicit IPv6 assertions to match the new fixtures.

You now seed IPv6 for all peers, but only IPv4 is asserted (Line 716). Add `IPv6` checks so `GetAccount` regressions on v6 mapping are caught.

✅ Suggested assertion additions

```diff
 p1, exists := retrievedAccount.Peers[peerID1]
 require.True(t, exists, "Peer 1 should exist")
 assert.Equal(t, "Peer 1", p1.Name, "Peer 1 name mismatch")
 assert.Equal(t, "peer-key-1-AAAA", p1.Key, "Peer 1 key mismatch")
 assert.Equal(t, netip.MustParseAddr("100.64.0.1"), p1.IP, "Peer 1 IP mismatch")
+assert.Equal(t, netip.MustParseAddr("fd00::1"), p1.IPv6, "Peer 1 IPv6 mismatch")
@@
 p2, exists := retrievedAccount.Peers[peerID2]
 require.True(t, exists, "Peer 2 should exist")
+assert.Equal(t, netip.MustParseAddr("fd00::2"), p2.IPv6, "Peer 2 IPv6 mismatch")
@@
 p3, exists := retrievedAccount.Peers[peerID3]
 require.True(t, exists, "Peer 3 should exist")
+assert.Equal(t, netip.MustParseAddr("fd00::3"), p3.IPv6, "Peer 3 IPv6 mismatch")
```

Also applies to: 199-200, 237-238, 716-716
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/store/sql_store_get_account_test.go` around lines 151 - 153, The test seeds IPv6 addresses for peers but only asserts IPv4 values in the GetAccount-related tests; update the assertions in sql_store_get_account_test.go (the GetAccount tests) to also assert the IPv6 field for each peer entry. For each existing IPv4 assertion (e.g., comparisons against netip.MustParseAddr("100.64.0.1") etc.) add a matching IPv6 assertion comparing the record's IPv6 value to the expected netip.MustParseAddr("fd00::1") (and the corresponding IPv6 addresses used in other fixtures at the referenced locations), so GetAccount regressions on v6 mappings are caught.

management/server/http/handlers/peers/peers_handler_test.go (1)
231-233: IPv6 fixtures are added, but IPv6 behavior is still not asserted.
`Peer.IPv6` is now populated in test setup, but these suites still only assert/update `Ip` (IPv4). Add explicit checks for `got.Ipv6` and an IPv6 update request case to protect the new dual-stack path.

🧪 Suggested test additions

```diff
-assert.Equal(t, tc.expectedPeer.IP.String(), got.Ip)
+assert.Equal(t, tc.expectedPeer.IP.String(), got.Ip)
+assert.Equal(t, tc.expectedPeer.IPv6.String(), got.Ipv6)
```

```diff
 tt := []struct {
 	name           string
 	peerID         string
 	requestBody    string
 	callerUserID   string
 	expectedStatus int
 	expectedIP     string
+	expectedIPv6   string
 }{
 	{
 		name:           "update peer IP successfully",
 		peerID:         testPeerID,
 		requestBody:    `{"ip": "100.64.0.100"}`,
 		callerUserID:   adminUser,
 		expectedStatus: http.StatusOK,
 		expectedIP:     "100.64.0.100",
 	},
+	{
+		name:           "update peer IPv6 successfully",
+		peerID:         testPeerID,
+		requestBody:    `{"ipv6": "fd00::100"}`,
+		callerUserID:   adminUser,
+		expectedStatus: http.StatusOK,
+		expectedIPv6:   "fd00::100",
+	},
 }
```

Also applies to: 539-540
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/http/handlers/peers/peers_handler_test.go` around lines 231 - 233, The tests populate Peer.IPv6 but never assert or exercise the IPv6 path; update the peers_handler_test suites to assert got.Ipv6 equals the fixture (e.g., netip.MustParseAddr("fd00::1")) wherever they currently assert got.Ip, and add a new update-request subcase that sends an IPv6-only or dual-stack update payload and verifies the handler updates Peer.IPv6 (check the same response object used as got and the update handler function/endpoint exercised by the test). Ensure you reference the Peer struct fields (Peer.IPv6 and Peer.Ip / got.Ipv6 and got.Ip) and the existing test update case so the new IPv6 case mirrors its structure and assertions.

management/server/group_ipv6_test.go (1)
30-32: Prefer a deterministic seed in this test.

Using `time.Now().UnixNano()` makes failures harder to reproduce, and this test doesn't need entropy because the store is isolated.

Possible cleanup

```diff
-account.Network.NetV6 = types.AllocateIPv6Subnet(rand.New(rand.NewSource(time.Now().UnixNano())))
+account.Network.NetV6 = types.AllocateIPv6Subnet(rand.New(rand.NewSource(1)))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/group_ipv6_test.go` around lines 30 - 32, The test uses a non-deterministic seed via rand.NewSource(time.Now().UnixNano()) when calling types.AllocateIPv6Subnet, which makes reproducing failures hard; replace the time-based seed with a fixed deterministic seed (e.g., a constant integer) when creating the rand.Source used by AllocateIPv6Subnet so account.Network.NetV6 is allocated deterministically before calling am.Store.SaveAccount(ctx, account) — update the rand.New(rand.NewSource(...)) expression accordingly.

management/server/store/sql_store_test.go (1)
2655-2710: Assert `Peer.IPv6` round-trips in the persistence test.

The fixture now writes `IPv6`, but the assertions still only verify `IP`. A broken column mapping/serializer for the new field would pass unnoticed.

Suggested assertion

```diff
 assert.Equal(t, peer.ID, storedPeer.ID)
 assert.Equal(t, peer.AccountID, storedPeer.AccountID)
 assert.Equal(t, peer.Key, storedPeer.Key)
 assert.Equal(t, peer.IP.String(), storedPeer.IP.String())
+assert.Equal(t, peer.IPv6.String(), storedPeer.IPv6.String())
 assert.Equal(t, peer.Meta, storedPeer.Meta)
 assert.Equal(t, peer.Name, storedPeer.Name)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/store/sql_store_test.go` around lines 2655 - 2710, The test adds IPv6 to the Peer fixture but never asserts it round-trips; update the persistence test after AddPeerToAccount/GetPeerByID to assert that peer.IPv6 equals storedPeer.IPv6 (or their string forms) so any broken mapping/serialization for the IPv6 column is caught—modify the assertions around storedPeer (in the test using peer, storedPeer, AddPeerToAccount, GetPeerByID) to include an IPv6 equality check (or WithinDuration-style comparison if you convert to strings) matching how IP is checked.

management/server/route_test.go (1)
1579-1648: These IPv6 fixture additions are not exercising the new behavior.

Both tests still assert only policy selection/counts, so the added `Peer.IPv6` and `PeerSystemMeta.Capabilities` values never affect the outcome. As written, this file still won't fail if the IPv6 route/firewall rule generation regresses. Add an assertion path that actually builds and inspects the dual-stack firewall output.

Also applies to: 2169-2276
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/route_test.go` around lines 1579 - 1648, The new IPv6 fields (Peer.IPv6 and PeerSystemMeta.Capabilities) are added to fixtures but not asserted against, so update the tests in route_test.go to actually invoke the route/firewall generation path and assert on the produced dual-stack firewall rules (inspect the generated firewall/routes output for IPv6 entries and capability-driven rules) rather than only checking policy selection/counts; specifically, locate the test cases that use the modified peer fixtures and add assertions that the generated firewall output includes expected IPv6 rules for peers with Peer.IPv6 set and includes/omits rules based on PeerSystemMeta.Capabilities so regressions in IPv6 rule generation will fail.

management/server/account.go (1)
2376-2387: Consider early return when IPv6 is disabled to avoid unnecessary allocation.

When `settings.IPv6EnabledGroups` is empty (IPv6 disabled), this function still allocates a new subnet if one doesn't exist. While not incorrect, this is wasteful since no addresses will be assigned. You could check the enabled groups in the caller (`updatePeerIPv6Addresses`) before invoking subnet allocation.

Optional optimization

Add an early check in `updatePeerIPv6Addresses` before calling `ensureIPv6Subnet`:

```diff
 func (am *DefaultAccountManager) updatePeerIPv6Addresses(ctx context.Context, transaction store.Store, accountID string, settings *types.Settings) error {
+	// If IPv6 is disabled, just clear all peer IPv6 addresses without allocating a subnet
+	if len(settings.IPv6EnabledGroups) == 0 && !settings.NetworkRangeV6.IsValid() {
+		return am.clearAllPeerIPv6Addresses(ctx, transaction, accountID)
+	}
+
 	peers, err := transaction.GetAccountPeers(ctx, store.LockingStrengthUpdate, accountID, "", "")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@management/server/account.go` around lines 2376 - 2387, The function currently allocates an IPv6 subnet even when IPv6 is disabled; modify the caller updatePeerIPv6Addresses to check settings.IPv6EnabledGroups (e.g., len(settings.IPv6EnabledGroups) == 0) and return/skip calling ensureIPv6Subnet when there are no enabled groups, so no subnet is allocated unnecessarily; updatePeerIPv6Addresses is the unique symbol to change so ensureIPv6Subnet is only invoked when IPv6 is enabled.

client/firewall/uspfilter/filter_bench_test.go (1)
1025-1027: Add an IPv6 packet scenario so the new clamp value is actually exercised.

All benchmark traffic in this file still appears to be IPv4, so `mssClampValueIPv6` is configured but never used. That leaves the new IPv6 clamping path without perf coverage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/firewall/uspfilter/filter_bench_test.go` around lines 1025 - 1027, The benchmarks enable MSS clamping (manager.mssClampEnabled) and set manager.mssClampValueIPv6 but never send IPv6 traffic; add at least one benchmark scenario that constructs and processes an IPv6 packet (e.g., create a synthetic IPv6 TCP SYN packet with a large MSS option) and feed it through the same benchmark path used by the IPv4 tests so the code paths in functions that read manager.mssClampValueIPv6 are exercised; reuse the existing benchmark helper(s) used for IPv4 packets in filter_bench_test.go and parameterize them to build an IPv6 packet variant so the IPv6 clamp value is actually applied and measured.

client/firewall/uspfilter/nat_test.go (1)
89-100: The new IPv6 decoder path still isn't covered here.
`parsePacket` is now dual-stack, but every caller in this file still feeds `generateDNATTestPacket`, which builds IPv4 packets only. Add at least one IPv6 DNAT case so `parser6`/`decodePacket` gets real test coverage.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/firewall/uspfilter/nat_test.go` around lines 89 - 100, Tests only exercise the IPv4 parsing path; add a DNAT test case that builds and feeds an IPv6 packet so parser6/decodePacket get exercised. Update or add a helper (e.g., generateDNATTestPacketV6 or extend generateDNATTestPacket to accept an IP version) to construct an IPv6 DNAT packet, then add a test case that calls d.decodePacket with that IPv6 packet and asserts the same DNAT expectations as the IPv4 tests; reference the decoder fields (d.parser6) and the decode function (decodePacket) when locating where to add the case.client/firewall/uspfilter/filter_test.go (2)
533-550: Dual-stack parser initialization is duplicated.The decoder pool initialization with
`parser4` and `parser6` is repeated in `TestProcessOutgoingHooks` and `TestStatefulFirewall_UDPTracking`. Consider extracting this to a test helper function to reduce duplication.

♻️ Suggested helper extraction

```go
func newTestDecoderPool() sync.Pool {
	return sync.Pool{
		New: func() any {
			d := &decoder{decoded: []gopacket.LayerType{}}
			d.parser4 = gopacket.NewDecodingLayerParser(
				layers.LayerTypeIPv4,
				&d.eth, &d.ip4, &d.ip6, &d.icmp4, &d.icmp6, &d.tcp, &d.udp,
			)
			d.parser4.IgnoreUnsupported = true
			d.parser6 = gopacket.NewDecodingLayerParser(
				layers.LayerTypeIPv6,
				&d.eth, &d.ip4, &d.ip6, &d.icmp4, &d.icmp6, &d.tcp, &d.udp,
			)
			d.parser6.IgnoreUnsupported = true
			return d
		},
	}
}
```

Also applies to: 641-658
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/firewall/uspfilter/filter_test.go` around lines 533 - 550, Duplicate dual-stack decoder initialization (manager.decoders using decoder with parser4 and parser6) appears in multiple tests; extract it into a shared test helper (e.g., newTestDecoderPool) that returns a sync.Pool configured with the same New function creating &decoder{decoded: []gopacket.LayerType{}} and initializing d.parser4 and d.parser6 with IgnoreUnsupported=true, then replace the inline manager.decoders assignments in TestProcessOutgoingHooks and TestStatefulFirewall_UDPTracking to call the new helper.
1347-1383: IPv6 test cases run before the original IPv4 tests after manager recreation.The v6Cases loop (lines 1375-1383) runs after recreating the manager with IPv6 support (line 1361), but the original
`tests` loop starting at line 1385 also uses this IPv6-enabled manager. This is fine since the original tests use `wgIP` (IPv4) destinations, but note that `manager.localForwarding` and `manager.netstack` are modified by v6Cases and not reset before the original tests run.Consider resetting manager state or running v6Cases after the original tests to avoid any potential interference.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/firewall/uspfilter/filter_test.go` around lines 1347 - 1383, The v6Cases loop mutates manager.localForwarding and manager.netstack and runs before the existing tests loop, risking state leakage; either move the v6Cases block to after the original tests loop, or explicitly reset manager.localForwarding and manager.netstack to their original values (the same defaults used by the original tests) immediately after the v6Cases loop so the subsequent tests that use wgIP/wgIPv6 see a clean manager state; reference the manager object and the v6Cases loop when applying the fix.client/firewall/nftables/external_chain_monitor_integration_linux_test.go (1)
31-32: Sleep-based synchronization may cause test flakiness.The 200ms sleep to wait for netlink subscription registration could be insufficient on slow CI environments or excessive on fast machines. Consider using a more deterministic synchronization mechanism if flakiness occurs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/firewall/nftables/external_chain_monitor_integration_linux_test.go` around lines 31 - 32, Replace the brittle time.Sleep(200 * time.Millisecond) used to "Give the netlink subscription a moment to register" with a deterministic synchronization: update the test to wait for the netlink subscription readiness explicitly (for example by exposing/using a ready channel from the subscription API, returning an error from the subscription initializer, or polling with a short interval and timeout until the subscription is observed/registered) instead of sleeping; locate the sleep call (time.Sleep(200 * time.Millisecond)) and change the surrounding test to block on the subscription's ready signal or a poll loop with context timeout so the test becomes robust across slow CI and fast machines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@client/firewall/nftables/manager_linux_test.go`:
- Around line 504-511: The helper verifyIp6tablesOutput currently only checks
stdout for ip6tables-save incompatibility messages; update it (function
verifyIp6tablesOutput) to also inspect stderr for the same three messages
("Table `nat' is incompatible", "Table `mangle' is incompatible", "Table
`filter' is incompatible") and fail the test if any appear, mirroring the IPv4
helper behavior so runIp6tablesSave's captured stderr is validated as well.
In `@client/firewall/nftables/manager_linux.go`:
- Around line 356-367: The v4 NAT rule is added before attempting the v6 mirror,
leaving a half-configured state if m.router6.AddNatRule(v6Pair) fails; fix by
making the operation atomic: after calling m.router.AddNatRule(pair), if
m.hasIPv6() && pair.Dynamic then build v6Pair := firewall.ToV6NatPair(pair) and
attempt m.router6.AddNatRule(v6Pair); if that call fails, immediately call the
corresponding removal on the v4 router to rollback (e.g.,
m.router.RemoveNatRule(pair) or the project's equivalent), and return an error
that wraps both the original v6 error and any rollback/remove error (or log the
rollback error while returning the v6 error) so the manager does not remain
half-configured.
In `@client/firewall/nftables/router_linux.go`:
- Around line 874-879: The IPv6 MSS calculation can underflow because r.mtu is a
uint16 and subtracting overhead (now 60) can wrap; update the logic around
r.mtu, overhead, and mss so you first compare r.mtu to overhead (use r.mtu <=
overhead) and in that case disable or skip the MSS clamp (e.g., set mss to 0 or
return early), otherwise compute mss as the safe subtraction (r.mtu - overhead)
after confirming no underflow; apply this change in the block that references
r.af.tableFamily, ipv4TCPHeaderSize, ipv6TCPHeaderSize, overhead, and mss so the
IPv6 path no longer produces a wrapped huge value.
In `@client/firewall/uspfilter/conntrack/icmp.go`:
- Around line 25-28: The change widened MaxICMPPayloadLength to 48 but also
(incorrectly) raised the decode gate in ICMPInfo.String(), so IPv4 ICMP errors
with the normal 28-byte embedded packet no longer render; introduce a new
family-specific parse minimum constant (e.g. IPv4ParseMin = 28) and use that
minimum when deciding whether to decode the embedded original packet in
ICMPInfo.String()/the parse path (check the packet family via the existing
family field or enum), leaving MaxICMPPayloadLength as the storage maximum for
IPv6; update any conditionals that currently compare lengths against
MaxICMPPayloadLength to instead use the appropriate family-specific min.
- Around line 284-287: IsValidInbound was extended to accept ICMPv6 echo replies
but the outbound/request tracking still only keys on ICMPv4 echo requests, so
ICMPv6 replies never find a matching entry in t.connections; update the
outbound/request tracking logic (the code that inserts into t.connections—e.g.,
the method that checks for layers.ICMPv4TypeEchoRequest) to also treat
layers.ICMPv6TypeEchoRequest as a request key when the packet family is IPv6,
and ensure lookup/insert uses a family-aware key (or checks both v4/v6 request
types) so that ICMPv6 requests are stored and their corresponding type-129
replies can be matched by IsValidInbound.
In `@client/firewall/uspfilter/filter.go`:
- Around line 275-279: Precomputing m.mssClampValueIPv4/IPv6 currently subtracts
header sizes using unsigned types which underflows for small MTUs; change the
assignment in the block guarded by disableMSSClamping to compute the difference
in a signed/extended type and set the clamp value to 0 if mtu <= corresponding
header constant (use the names ipv4TCPHeaderMinSize and ipv6TCPHeaderMinSize and
set m.mssClampValueIPv4 and m.mssClampValueIPv6 to 0 on underflow), keep setting
m.mssClampEnabled as before, and update clampTCPMSS() to treat a precomputed
clamp value of 0 as “no clamp” (skip clamping when m.mssClampValueIPv4 or
m.mssClampValueIPv6 equals 0).
- Around line 1312-1320: The inbound ICMPv6 conntrack check is using the raw
ICMPv6 type from d.icmp6.TypeCode.Type() instead of the normalized type used
when state was stored by icmpv6EchoFields()/trackOutbound()/trackInbound; update
the call in the LayerTypeICMPv6 branch so it passes the normalized type returned
by icmpv6EchoFields(d) (use the normalized/type value from that helper instead
of d.icmp6.TypeCode.Type()) when calling m.icmpTracker.IsValidInbound, ensuring
the same type normalization is used for both storage and validation.
In `@client/firewall/uspfilter/nat_bench_test.go`:
- Around line 343-355: BenchmarkDNATMemoryAllocations currently constructs two
gopacket.DecodingLayerParser instances (via gopacket.NewDecodingLayerParser
assigned to decoder.parser4 and decoder.parser6) on every iteration which
pollutes the DNAT allocation profile; move the parser construction out of the
per-iteration loop by creating a single shared decoder (the decoder struct with
parser4 and parser6) before the benchmark loop and reuse it each iteration,
resetting any per-packet state (e.g., decoder.decoded = nil or call a reset
method) before calling d.decodePacket(testPacket) so only DNAT-related
allocations are measured.
In `@client/iface/wgproxy/bind/proxy.go`:
- Around line 203-217: The code constructs a netip.AddrPort from peerAddress.IP
and peerAddress.Port but casts peerAddress.Port directly to uint16 (in the block
that creates fakeIP and netipAddr), which can wrap invalid negative or >65535
values; update the logic in the function that contains this block to validate
that peerAddress.Port is within 0..65535 before conversion, return a clear error
(e.g., "invalid port") if out of range, and only then cast to uint16 and build
netip.AddrPortFrom(fakeIP, uint16(peerAddress.Port)); keep the existing IP/Addr
unmap and fakeIP logic unchanged.
In `@client/internal/debug/debug.go`:
- Around line 1257-1262: When anonymizing IPv6 prefixes in the block using
netiputil.DecodePrefix and netiputil.EncodePrefix (the anonV6/path that sets
config.AddressV6), fail closed instead of silently skipping: if DecodePrefix or
EncodePrefix returns an error, clear config.AddressV6 (set to nil/empty) or
return the error up the call path so the original bytes cannot be emitted; apply
the same change to the equivalent SourcePrefixes handling (the block around
lines 1371-1379) so any decode/encode failure either clears those SourcePrefixes
entries or errors out rather than leaving original data intact.
In `@client/internal/routemanager/notifier/notifier_android.go`:
- Around line 117-120: GetInitialRouteRanges currently serializes only
n.initialRoutes, so the initial Android TUN setup can miss n.fakeIPRoutes;
update GetInitialRouteRanges in the Notifier to include n.fakeIPRoutes as well
(e.g., merge or append the fake-IP routes into the input to routesToStrings or
call routesToStrings on both n.initialRoutes and n.fakeIPRoutes and combine the
results) before sorting and returning; reference functions/fields: Notifier,
GetInitialRouteRanges, n.initialRoutes, n.fakeIPRoutes, and routesToStrings.
In `@client/internal/routemanager/notifier/notifier_ios.go`:
- Around line 67-69: The goroutine closure reads n.currentPrefixes at call time
causing racey/incorrect snapshots; capture the payload first into a local
variable (e.g., snapshot := strings.Join(n.currentPrefixes, ",")) and pass that
snapshot into the goroutine so listener.NetworkChangeListener.OnNetworkChanged
is invoked with the captured string instead of reading n.currentPrefixes inside
the goroutine.
In `@client/internal/routemanager/server/server.go`:
- Around line 41-42: Before overwriting r.useNewDNSRoute, capture the previous
mode (e.g., prevUseNew := r.useNewDNSRoute) and use that captured value when
building delete-pairs for removals and cleanup instead of the new useNewDNSRoute
value; specifically, keep the assignment r.useNewDNSRoute = useNewDNSRoute but
change the removal/cleanup logic that builds delete pairs (the code paths
referenced at the removal and cleanup locations) to accept or read prevUseNew so
deletions are constructed with the original mode that was used to insert the
existing NAT rules.
In `@client/internal/routemanager/systemops/systemops.go`:
- Around line 116-119: The isOwnAddress method on SysOps dereferences
r.wgInterface without a nil guard which can panic; update isOwnAddress to first
check if r.wgInterface is nil and return false if so, then call
r.wgInterface.Address() and perform the existing checks (use
wgAddr.Network.Contains(addr) and wgAddr.IPv6Net.IsValid()/Contains(addr));
reference SysOps.isOwnAddress and the wgInterface.Address() call when making
this change.
In `@client/server/network.go`:
- Around line 73-77: The code assumes the companion v6 route slice has at least
one element before indexing; change the merge branch to verify the companion
slice is non-empty (e.g., if _, ok := v6ExitMerged[v6ID]; ok &&
len(routesMap[v6ID]) > 0) before reading routesMap[v6ID][0].Network and
assigning r.extraNetworks; reference v6ID, v6ExitMerged, routesMap and
r.extraNetworks in the fix.
In `@client/server/trace.go`:
- Around line 118-133: selfAddr may return a zero netip.Addr for the wrong IP
family; change the flow to fail fast by making selfAddr report that (either by
returning an error or by having the caller validate the returned address with
addr.IsValid()), and update the TracePacket caller to detect a zero/invalid
address and return a clear configuration error instead of building a packet with
an invalid endpoint. Specifically, ensure the unique symbols selfAddr and the
TracePacket (or the function that sets srcAddr/dstAddr) perform a validity check
on the address returned for the requested family and propagate a descriptive
error when no address exists for that family.
In `@management/server/group.go`:
- Around line 314-338: The issue is that prepareGroupEvents is called after
transaction.UpdateGroup so it reloads the already-updated group and emits no
diffs; change updateSingleGroup so you call prepareGroupEvents(ctx, transaction,
accountID, userID, newGroup) and capture events before calling
transaction.UpdateGroup (and before
reconcileIPv6ForGroupChanges/IncrementNetworkSerial), ensuring validateNewGroup
still runs first and newGroup.AccountID is set prior to preparing events so
diffs are computed against the pre-update state.
In `@management/server/http/handlers/dns/nameservers_handler.go`:
- Line 207: The current call that constructs the nameserver URL uses
strings.Trim(apiNS.Ip, "[]") which can silently accept malformed bracketed
addresses; before trimming, validate bracket pairing on apiNS.Ip (if it contains
'[' or ']' ensure it starts with '[' and ends with ']' and that both are
present) and return an error for malformed input; only then call strings.Trim
and proceed to net.JoinHostPort and nbdns.ParseNameServerURL so
ParseNameServerURL never receives a silently normalized, invalid host.
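Balanced-bracket validation could be sketched like this; `normalizeNSHost` is a hypothetical helper, not the handler's existing code:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeNSHost strips optional [] brackets from a nameserver IP
// string, but rejects unbalanced or stray brackets instead of silently
// trimming them, so a later address parse never sees half-normalized
// input.
func normalizeNSHost(ip string) (string, error) {
	hasOpen := strings.HasPrefix(ip, "[")
	hasClose := strings.HasSuffix(ip, "]")
	switch {
	case hasOpen && hasClose:
		return ip[1 : len(ip)-1], nil
	case !hasOpen && !hasClose:
		if strings.ContainsAny(ip, "[]") {
			return "", fmt.Errorf("malformed address %q: stray bracket", ip)
		}
		return ip, nil
	default:
		return "", fmt.Errorf("malformed address %q: unbalanced brackets", ip)
	}
}

func main() {
	host, err := normalizeNSHost("[2001:4860:4860::8888]")
	if err != nil {
		panic(err)
	}
	fmt.Println(host) // 2001:4860:4860::8888
}
```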
In `@management/server/http/testing/testing_tools/tools.go`:
- Line 136: The current use of netip.MustParseAddr(fmt.Sprintf("100.64.%d.%d",
i/256, i%256)) can panic for large i; replace string parsing with a safe
construction using netip.AddrFrom4 and bounded octets: compute the two lower
octets as byte((i/256)&0xFF) and byte(i&0xFF) (or use %256), then call
netip.AddrFrom4([4]byte{100, 64, octet1, octet2}) to create the IP without
parsing; update the code in the peer-generation function (the block that sets IP
in tools.go) to use this approach and remove MustParseAddr usage.
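The bounded construction can be sketched as follows; `testPeerIP` is an illustrative name for the fixture helper:

```go
package main

import (
	"fmt"
	"net/netip"
)

// testPeerIP derives a 100.64.x.y address from a peer index without
// string parsing: the two low octets are taken modulo 256, so any
// index yields a valid address instead of panicking in MustParseAddr
// once i/256 exceeds 255.
func testPeerIP(i int) netip.Addr {
	return netip.AddrFrom4([4]byte{100, 64, byte((i / 256) & 0xFF), byte(i & 0xFF)})
}

func main() {
	fmt.Println(testPeerIP(1)) // 100.64.0.1
	fmt.Println(testPeerIP(70000)) // wraps the octets instead of panicking
}
```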
In `@management/server/peer.go`:
- Around line 764-770: The AddPeer IPv6 allocation block currently aborts
registration if am.Store.GetGroupByName(ctx, ..., "All") returns an error;
change this so the lookup is soft-failed: in the block where you call
am.Store.GetGroupByName (within the IPv6 allocation logic that checks
len(settings.IPv6EnabledGroups)>0 && network.NetV6.IP!=nil and
peer.ProxyMeta.Embedded), do not return on error — instead log/debug the error,
leave allGroupID empty (or nil) and continue so newPeer.IPv6 remains unset;
ensure subsequent code handles an empty allGroupID the same way it would if the
group simply didn’t exist so registration proceeds and a later sync can recover
IPv6 allocation.
---
Outside diff comments:
In `@client/firewall/iptables/router_linux.go`:
- Around line 1040-1057: The OUTPUT DNAT rule in AddOutputDNAT currently
lowercases string(protocol), which fails for family-specific protocol names
(e.g., icmp vs icmpv6); update AddOutputDNAT to derive the IP family from
localAddr and call protoForFamily(family, protocol) when building dnatRule
instead of strings.ToLower(string(protocol)), ensuring the correct protocol
token (like "icmpv6") is used; keep the rest of the rule construction and ruleID
logic unchanged.
In `@client/firewall/nftables/manager_linux.go`:
- Around line 178-186: initFirewall replaces the work table via
createWorkTable() but doesn't rollback when m.router.init(workTable) fails,
leaving partial nftables state; update Manager.initFirewall to call
m.rollbackInit() (or equivalent rollback method) if m.router.init(...) returns
an error, and return a combined error that includes both the original router
init error and any rollback error (so call rollback even on router init failure
and surface rollback failures alongside the original error).
In `@client/ui/network.go`:
- Around line 195-203: getExitNodeNetworks currently uses strings.Contains to
detect default routes while another codepath uses exact equality, causing
inconsistent behavior; add a single canonical helper function (e.g.,
isDefaultRoute(routeRange string) bool) that returns true only for the exact
strings "0.0.0.0/0" or "::/0", replace the strings.Contains check in
getExitNodeNetworks with a call to isDefaultRoute, and replace the other
exact-match check (the code that currently compares to "0.0.0.0/0" or "::/0") to
also call isDefaultRoute so both the Networks tab and tray menu use the same
logic.
---
Duplicate comments:
In `@client/internal/peer/status_test.go`:
- Line 89: The test currently ignores the error returned by status.AddPeer(key,
"abc.netbird", ip, ""), which can mask setup failures; change the call to check
the returned error and fail the test on error (e.g., err := status.AddPeer(...);
if err != nil { t.Fatalf("AddPeer failed: %v", err) }) so setup failures are
surfaced; ensure you reference the existing status.AddPeer invocation and use
the test's t to fail.
---
Nitpick comments:
In `@client/firewall/nftables/external_chain_monitor_integration_linux_test.go`:
- Around line 31-32: Replace the brittle time.Sleep(200 * time.Millisecond) used
to "Give the netlink subscription a moment to register" with a deterministic
synchronization: update the test to wait for the netlink subscription readiness
explicitly (for example by exposing/using a ready channel from the subscription
API, returning an error from the subscription initializer, or polling with a
short interval and timeout until the subscription is observed/registered)
instead of sleeping; locate the sleep call (time.Sleep(200 * time.Millisecond))
and change the surrounding test to block on the subscription's ready signal or a
poll loop with context timeout so the test becomes robust across slow CI and
fast machines.
In `@client/firewall/uspfilter/filter_bench_test.go`:
- Around line 1025-1027: The benchmarks enable MSS clamping
(manager.mssClampEnabled) and set manager.mssClampValueIPv6 but never send IPv6
traffic; add at least one benchmark scenario that constructs and processes an
IPv6 packet (e.g., create a synthetic IPv6 TCP SYN packet with a large MSS
option) and feed it through the same benchmark path used by the IPv4 tests so
the code paths in functions that read manager.mssClampValueIPv6 are exercised;
reuse the existing benchmark helper(s) used for IPv4 packets in
filter_bench_test.go and parameterize them to build an IPv6 packet variant so
the IPv6 clamp value is actually applied and measured.
In `@client/firewall/uspfilter/filter_test.go`:
- Around line 533-550: Duplicate dual-stack decoder initialization
(manager.decoders using decoder with parser4 and parser6) appears in multiple
tests; extract it into a shared test helper (e.g., newTestDecoderPool) that
returns a sync.Pool configured with the same New function creating
&decoder{decoded: []gopacket.LayerType{}} and initializing d.parser4 and
d.parser6 with IgnoreUnsupported=true, then replace the inline manager.decoders
assignments in TestProcessOutgoingHooks and TestStatefulFirewall_UDPTracking to
call the new helper.
- Around line 1347-1383: The v6Cases loop mutates manager.localForwarding and
manager.netstack and runs before the existing tests loop, risking state leakage;
either move the v6Cases block to after the original tests loop, or explicitly
reset manager.localForwarding and manager.netstack to their original values (the
same defaults used by the original tests) immediately after the v6Cases loop so
the subsequent tests that use wgIP/wgIPv6 see a clean manager state; reference
the manager object and the v6Cases loop when applying the fix.
In `@client/firewall/uspfilter/nat_test.go`:
- Around line 89-100: Tests only exercise the IPv4 parsing path; add a DNAT test
case that builds and feeds an IPv6 packet so parser6/decodePacket get exercised.
Update or add a helper (e.g., generateDNATTestPacketV6 or extend
generateDNATTestPacket to accept an IP version) to construct an IPv6 DNAT
packet, then add a test case that calls d.decodePacket with that IPv6 packet and
asserts the same DNAT expectations as the IPv4 tests; reference the decoder
fields (d.parser6) and the decode function (decodePacket) when locating where to
add the case.
In `@client/internal/lazyconn/activity/listener_bind.go`:
- Around line 68-99: deriveFakeIP currently duplicates the "last two bytes →
127.2.x.x" logic found in client/iface/wgproxy/bind/proxy.go's fakeAddress;
centralize this by extracting a shared helper (e.g.,
BuildLoopbackFromLastTwoBytes or To127TwoByteLoopback) that takes a byte slice
or netip.Addr and returns the netip.Addr loopback, then replace the logic in
deriveFakeIP (the peerIP.As4()/raw[14..15] branches) and in fakeAddress to call
that helper; ensure the helper handles IPv4/IPv6 inputs, validates lengths, and
returns an error or zero addr on invalid input so existing callers' error
handling (deriveFakeIP's final fmt.Errorf path) remains correct.
In `@management/server/account.go`:
- Around line 2376-2387: The function currently allocates an IPv6 subnet even
when IPv6 is disabled; modify the caller updatePeerIPv6Addresses to check
settings.IPv6EnabledGroups (e.g., len(settings.IPv6EnabledGroups) == 0) and
return/skip calling ensureIPv6Subnet when there are no enabled groups, so no
subnet is allocated unnecessarily; updatePeerIPv6Addresses is the unique symbol
to change so ensureIPv6Subnet is only invoked when IPv6 is enabled.
In `@management/server/group_ipv6_test.go`:
- Around line 30-32: The test uses a non-deterministic seed via
rand.NewSource(time.Now().UnixNano()) when calling types.AllocateIPv6Subnet,
which makes reproducing failures hard; replace the time-based seed with a fixed
deterministic seed (e.g., a constant integer) when creating the rand.Source used
by AllocateIPv6Subnet so account.Network.NetV6 is allocated deterministically
before calling am.Store.SaveAccount(ctx, account) — update the
rand.New(rand.NewSource(...)) expression accordingly.
In `@management/server/http/handlers/dns/nameservers_handler_test.go`:
- Around line 237-269: Extend the TestToServerNSList_IPv6 table to include a
bracketed IPv6 success case and malformed-bracket failure cases: add an input
with Ip set to "[2001:4860:4860::8888]" and expect the same
netip.MustParseAddr("2001:4860:4860::8888") and Port 53 (call to toServerNSList
should succeed), and add inputs with unmatched brackets such as
"[2001:4860:4860::8888" and "2001:4860:4860::8888]" where calling toServerNSList
should return an error (use assert.Error/NoError accordingly); update the
subtest assertions to check for error vs success instead of always asserting no
error, referencing the TestToServerNSList_IPv6 test and the toServerNSList
function.
In `@management/server/http/handlers/peers/peers_handler_test.go`:
- Around line 231-233: The tests populate Peer.IPv6 but never assert or exercise
the IPv6 path; update the peers_handler_test suites to assert got.Ipv6 equals
the fixture (e.g., netip.MustParseAddr("fd00::1")) wherever they currently
assert got.Ip, and add a new update-request subcase that sends an IPv6-only or
dual-stack update payload and verifies the handler updates Peer.IPv6 (check the
same response object used as got and the update handler function/endpoint
exercised by the test). Ensure you reference the Peer struct fields (Peer.IPv6
and Peer.Ip / got.Ipv6 and got.Ip) and the existing test update case so the new
IPv6 case mirrors its structure and assertions.
In `@management/server/route_test.go`:
- Around line 1579-1648: The new IPv6 fields (Peer.IPv6 and
PeerSystemMeta.Capabilities) are added to fixtures but not asserted against, so
update the tests in route_test.go to actually invoke the route/firewall
generation path and assert on the produced dual-stack firewall rules (inspect
the generated firewall/routes output for IPv6 entries and capability-driven
rules) rather than only checking policy selection/counts; specifically, locate
the test cases that use the modified peer fixtures and add assertions that the
generated firewall output includes expected IPv6 rules for peers with Peer.IPv6
set and includes/omits rules based on PeerSystemMeta.Capabilities so regressions
in IPv6 rule generation will fail.
In `@management/server/store/sql_store_get_account_test.go`:
- Around line 151-153: The test seeds IPv6 addresses for peers but only asserts
IPv4 values in the GetAccount-related tests; update the assertions in
sql_store_get_account_test.go (the GetAccount tests) to also assert the IPv6
field for each peer entry. For each existing IPv4 assertion (e.g., comparisons
against netip.MustParseAddr("100.64.0.1") etc.) add a matching IPv6 assertion
comparing the record's IPv6 value to the expected netip.MustParseAddr("fd00::1")
(and the corresponding IPv6 addresses used in other fixtures at the referenced
locations), so GetAccount regressions on v6 mappings are caught.
In `@management/server/store/sql_store_test.go`:
- Around line 2655-2710: The test adds IPv6 to the Peer fixture but never
asserts it round-trips; update the persistence test after
AddPeerToAccount/GetPeerByID to assert that peer.IPv6 equals storedPeer.IPv6 (or
their string forms) so any broken mapping/serialization for the IPv6 column is
caught—modify the assertions around storedPeer (in the test using peer,
storedPeer, AddPeerToAccount, GetPeerByID) to include an IPv6 equality check (or
WithinDuration-style comparison if you convert to strings) matching how IP is
checked.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 27a48453-a2c5-4d8c-963c-f00488df3dfb
⛔ Files ignored due to path filters (1)
- `client/proto/daemon.pb.go` is excluded by `!**/*.pb.go`
📒 Files selected for processing (145)
- .gitignore
- client/android/client.go
- client/android/route_command.go
- client/anonymize/anonymize.go
- client/anonymize/anonymize_test.go
- client/cmd/ssh.go
- client/cmd/ssh_test.go
- client/firewall/iptables/acl_linux.go
- client/firewall/iptables/manager_linux.go
- client/firewall/iptables/router_linux.go
- client/firewall/iptables/rule.go
- client/firewall/iptables/state_linux.go
- client/firewall/manager/firewall.go
- client/firewall/manager/routerpair.go
- client/firewall/nftables/acl_linux.go
- client/firewall/nftables/addr_family_linux.go
- client/firewall/nftables/external_chain_monitor_integration_linux_test.go
- client/firewall/nftables/external_chain_monitor_linux.go
- client/firewall/nftables/external_chain_monitor_linux_test.go
- client/firewall/nftables/manager_linux.go
- client/firewall/nftables/manager_linux_test.go
- client/firewall/nftables/router_linux.go
- client/firewall/nftables/router_linux_test.go
- client/firewall/uspfilter/allow_netbird_windows.go
- client/firewall/uspfilter/conntrack/common.go
- client/firewall/uspfilter/conntrack/common_test.go
- client/firewall/uspfilter/conntrack/icmp.go
- client/firewall/uspfilter/conntrack/icmp_test.go
- client/firewall/uspfilter/filter.go
- client/firewall/uspfilter/filter_bench_test.go
- client/firewall/uspfilter/filter_filter_test.go
- client/firewall/uspfilter/filter_test.go
- client/firewall/uspfilter/forwarder/endpoint.go
- client/firewall/uspfilter/forwarder/forwarder.go
- client/firewall/uspfilter/forwarder/icmp.go
- client/firewall/uspfilter/forwarder/tcp.go
- client/firewall/uspfilter/forwarder/udp.go
- client/firewall/uspfilter/hooks_filter.go
- client/firewall/uspfilter/localip.go
- client/firewall/uspfilter/localip_bench_test.go
- client/firewall/uspfilter/localip_test.go
- client/firewall/uspfilter/nat.go
- client/firewall/uspfilter/nat_bench_test.go
- client/firewall/uspfilter/nat_test.go
- client/firewall/uspfilter/tracer.go
- client/iface/configurer/usp.go
- client/iface/device/adapter.go
- client/iface/device/device_android.go
- client/iface/device/device_windows.go
- client/iface/iface.go
- client/iface/wgproxy/bind/proxy.go
- client/internal/acl/manager.go
- client/internal/connect.go
- client/internal/debug/debug.go
- client/internal/debug/debug_test.go
- client/internal/dns/network_manager_unix.go
- client/internal/dns/service.go
- client/internal/dns/service_listener.go
- client/internal/dns/upstream.go
- client/internal/dns/upstream_android.go
- client/internal/dns/upstream_general.go
- client/internal/dns/upstream_ios.go
- client/internal/dnsfwd/manager.go
- client/internal/ebpf/ebpf/dns_fwd_linux.go
- client/internal/ebpf/manager/manager.go
- client/internal/engine.go
- client/internal/engine_ssh.go
- client/internal/engine_test.go
- client/internal/lazyconn/activity/listener_bind.go
- client/internal/netflow/conntrack/conntrack.go
- client/internal/peer/status.go
- client/internal/peer/status_test.go
- client/internal/profilemanager/config.go
- client/internal/relay/relay.go
- client/internal/rosenpass/manager.go
- client/internal/rosenpass/manager_test.go
- client/internal/routemanager/client/client.go
- client/internal/routemanager/dnsinterceptor/handler.go
- client/internal/routemanager/dynamic/route.go
- client/internal/routemanager/dynamic/route_ios.go
- client/internal/routemanager/fakeip/fakeip.go
- client/internal/routemanager/fakeip/fakeip_test.go
- client/internal/routemanager/ipfwdstate/ipfwdstate.go
- client/internal/routemanager/manager.go
- client/internal/routemanager/notifier/notifier_android.go
- client/internal/routemanager/notifier/notifier_ios.go
- client/internal/routemanager/notifier/notifier_other.go
- client/internal/routemanager/server/server.go
- client/internal/routemanager/systemops/systemops.go
- client/internal/routemanager/systemops/systemops_generic.go
- client/internal/routemanager/systemops/systemops_linux.go
- client/ios/NetBirdSDK/client.go
- client/proto/daemon.proto
- client/server/network.go
- client/server/server.go
- client/server/setconfig_test.go
- client/server/trace.go
- client/ssh/config/manager.go
- client/ssh/config/manager_test.go
- client/ssh/proxy/proxy.go
- client/ssh/server/port_forwarding.go
- client/ssh/server/server.go
- client/system/info.go
- client/ui/client_ui.go
- client/ui/event/event.go
- client/ui/network.go
- client/wasm/cmd/main.go
- client/wasm/internal/ssh/client.go
- combined/cmd/config.go
- management/internals/modules/reverseproxy/service/manager/l4_port_test.go
- management/internals/modules/reverseproxy/service/manager/manager.go
- management/internals/modules/reverseproxy/service/manager/manager_test.go
- management/internals/shared/grpc/conversion.go
- management/internals/shared/grpc/server.go
- management/server/account.go
- management/server/account/manager.go
- management/server/account/manager_mock.go
- management/server/account_test.go
- management/server/activity/codes.go
- management/server/group.go
- management/server/group_ipv6_test.go
- management/server/group_test.go
- management/server/http/handlers/accounts/accounts_handler.go
- management/server/http/handlers/accounts/accounts_handler_test.go
- management/server/http/handlers/dns/nameservers_handler.go
- management/server/http/handlers/dns/nameservers_handler_test.go
- management/server/http/handlers/groups/groups_handler_test.go
- management/server/http/handlers/peers/peers_handler.go
- management/server/http/handlers/peers/peers_handler_test.go
- management/server/http/testing/testing_tools/tools.go
- management/server/mock_server/account_mock.go
- management/server/peer.go
- management/server/peer/peer.go
- management/server/peer/peer_test.go
- management/server/peer_test.go
- management/server/policy_test.go
- management/server/route_test.go
- management/server/settings/manager.go
- management/server/settings/manager_mock.go
- management/server/store/sql_store.go
- management/server/store/sql_store_get_account_test.go
- management/server/store/sql_store_test.go
- management/server/store/sqlstore_bench_test.go
- management/server/store/store.go
- management/server/store/store_mock.go
💤 Files with no reviewable changes (1)
- client/firewall/uspfilter/hooks_filter.go
✅ Files skipped from review due to trivial changes (14)
- .gitignore
- client/firewall/iptables/rule.go
- client/internal/dnsfwd/manager.go
- client/cmd/ssh_test.go
- client/ssh/proxy/proxy.go
- client/internal/dns/upstream.go
- client/internal/relay/relay.go
- management/server/http/handlers/groups/groups_handler_test.go
- management/server/group_test.go
- client/internal/routemanager/ipfwdstate/ipfwdstate.go
- client/internal/dns/service.go
- management/server/activity/codes.go
- client/ssh/config/manager_test.go
- management/server/policy_test.go
🚧 Files skipped from review as they are similar to previous changes (8)
- client/proto/daemon.proto
- client/internal/dns/network_manager_unix.go
- client/wasm/internal/ssh/client.go
- client/internal/connect.go
- management/internals/shared/grpc/conversion.go
- client/wasm/cmd/main.go
- client/internal/dns/upstream_ios.go
- management/server/store/sql_store.go
func verifyIp6tablesOutput(t *testing.T, stdout, stderr string) {
	t.Helper()
	require.NotContains(t, stdout, "Table `nat' is incompatible",
		"ip6tables-save: nat table incompatible. Full output: %s", stdout)
	require.NotContains(t, stdout, "Table `mangle' is incompatible",
		"ip6tables-save: mangle table incompatible. Full output: %s", stdout)
	require.NotContains(t, stdout, "Table `filter' is incompatible",
		"ip6tables-save: filter table incompatible. Full output: %s", stdout)
Check stderr for ip6tables-save incompatibility warnings too.
runIp6tablesSave() captures stderr, but this helper only inspects stdout. If the incompatibility warning is emitted on stderr, as the IPv4 helper already assumes, the new IPv6 compatibility test will pass even when it should fail.
🛠️ Suggested fix
func verifyIp6tablesOutput(t *testing.T, stdout, stderr string) {
t.Helper()
+ require.NotContains(t, stderr, "incompatible",
+ "ip6tables-save produced compatibility warning. Full stderr: %s", stderr)
require.NotContains(t, stdout, "Table `nat' is incompatible",
"ip6tables-save: nat table incompatible. Full output: %s", stdout)
require.NotContains(t, stdout, "Table `mangle' is incompatible",
"ip6tables-save: mangle table incompatible. Full output: %s", stdout)
 require.NotContains(t, stdout, "Table `filter' is incompatible",

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/firewall/nftables/manager_linux_test.go` around lines 504 - 511, The
helper verifyIp6tablesOutput currently only checks stdout for ip6tables-save
incompatibility messages; update it (function verifyIp6tablesOutput) to also
inspect stderr for the same three messages ("Table `nat' is incompatible",
"Table `mangle' is incompatible", "Table `filter' is incompatible") and fail the
test if any appear, mirroring the IPv4 helper behavior so runIp6tablesSave's
captured stderr is validated as well.
if err := m.router.AddNatRule(pair); err != nil {
	return err
}

// Dynamic routes need NAT in both tables since resolved IPs can be
// either v4 or v6. This covers both DomainSet (modern) and the legacy
// wildcard 0.0.0.0/0 destination where the client resolves DNS.
if m.hasIPv6() && pair.Dynamic {
	v6Pair := firewall.ToV6NatPair(pair)
	if err := m.router6.AddNatRule(v6Pair); err != nil {
		return fmt.Errorf("add v6 NAT rule: %w", err)
	}
Mirror NAT updates atomically across v4 and v6.
For dynamic routes, the v4 NAT rules are installed before the v6 mirror. If the v6 add fails, this returns an error but keeps the v4 rules active, which leaves the manager in a half-configured state.
Suggested fix
if m.hasIPv6() && pair.Dynamic {
v6Pair := firewall.ToV6NatPair(pair)
if err := m.router6.AddNatRule(v6Pair); err != nil {
- return fmt.Errorf("add v6 NAT rule: %w", err)
+ if rollbackErr := m.router.RemoveNatRule(pair); rollbackErr != nil {
+ return fmt.Errorf("add v6 NAT rule: %w (rollback v4 NAT: %v)", err, rollbackErr)
+ }
+ return fmt.Errorf("add v6 NAT rule: %w", err)
}
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/firewall/nftables/manager_linux.go` around lines 356 - 367, The v4 NAT
rule is added before attempting the v6 mirror, leaving a half-configured state
if m.router6.AddNatRule(v6Pair) fails; fix by making the operation atomic: after
calling m.router.AddNatRule(pair), if m.hasIPv6() && pair.Dynamic then build
v6Pair := firewall.ToV6NatPair(pair) and attempt m.router6.AddNatRule(v6Pair);
if that call fails, immediately call the corresponding removal on the v4 router
to rollback (e.g., m.router.RemoveNatRule(pair) or the project's equivalent),
and return an error that wraps both the original v6 error and any
rollback/remove error (or log the rollback error while returning the v6 error)
so the manager does not remain half-configured.
overhead := uint16(ipv4TCPHeaderSize)
if r.af.tableFamily == nftables.TableFamilyIPv6 {
	overhead = ipv6TCPHeaderSize
}
mss := r.mtu - overhead
Guard the native IPv6 MSS clamp against MTU underflow.
r.mtu is a uint16, so any MTU smaller than the selected header overhead wraps mss to a huge value instead of disabling/failing the clamp. The new IPv6 path increases that risk because its overhead is now 60.
🛠️ Suggested fix
overhead := uint16(ipv4TCPHeaderSize)
if r.af.tableFamily == nftables.TableFamilyIPv6 {
overhead = ipv6TCPHeaderSize
}
+ if r.mtu <= overhead {
+ return fmt.Errorf("mtu %d too small for TCP MSS clamping", r.mtu)
+ }
 mss := r.mtu - overhead

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
overhead := uint16(ipv4TCPHeaderSize)
if r.af.tableFamily == nftables.TableFamilyIPv6 {
	overhead = ipv6TCPHeaderSize
}
if r.mtu <= overhead {
	return fmt.Errorf("mtu %d too small for TCP MSS clamping", r.mtu)
}
mss := r.mtu - overhead
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/firewall/nftables/router_linux.go` around lines 874 - 879, The IPv6
MSS calculation can underflow because r.mtu is a uint16 and subtracting overhead
(now 60) can wrap; update the logic around r.mtu, overhead, and mss so you first
compare r.mtu to overhead (use r.mtu <= overhead) and in that case disable or
skip the MSS clamp (e.g., set mss to 0 or return early), otherwise compute mss
as the safe subtraction (r.mtu - overhead) after confirming no underflow; apply
this change in the block that references r.af.tableFamily, ipv4TCPHeaderSize,
ipv6TCPHeaderSize, overhead, and mss so the IPv6 path no longer produces a
wrapped huge value.
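The wraparound this finding describes is easy to reproduce in isolation. The sketch below is standalone and not project code; the MTU and overhead values are chosen to mimic `r.mtu` and `ipv6TCPHeaderSize`:

```go
package main

import "fmt"

func main() {
	var mtu uint16 = 40      // pathological MTU below the IPv6 overhead
	var overhead uint16 = 60 // 40-byte IPv6 header + 20-byte TCP header
	mss := mtu - overhead    // unsigned subtraction wraps modulo 2^16
	fmt.Println(mss)         // 65516: a nonsense MSS instead of an error
}
```

This is why the guard must compare `r.mtu <= overhead` before subtracting rather than checking the result afterward.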
// MaxICMPPayloadLength is the maximum length of ICMP payload we consider for original packet info.
// IPv4: 20-byte header + 8-byte transport = 28 bytes.
// IPv6: 40-byte header + 8-byte transport = 48 bytes.
MaxICMPPayloadLength = 48
Split the IPv4 parse minimum from the IPv6 storage maximum.
Raising MaxICMPPayloadLength to 48 also raises the ICMPInfo.String() gate that decides whether to decode the embedded original packet. Normal IPv4 ICMP errors usually carry 28 bytes, so after this change their original TCP/UDP tuple stops being rendered in logs. Use a family-specific minimum parse length instead of reusing the max storage constant.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/firewall/uspfilter/conntrack/icmp.go` around lines 25 - 28, The change
widened MaxICMPPayloadLength to 48 but also (incorrectly) raised the decode gate
in ICMPInfo.String(), so IPv4 ICMP errors with the normal 28-byte embedded
packet no longer render; introduce a new family-specific parse minimum constant
(e.g. IPv4ParseMin = 28) and use that minimum when deciding whether to decode
the embedded original packet in ICMPInfo.String()/the parse path (check the
packet family via the existing family field or enum), leaving
MaxICMPPayloadLength as the storage maximum for IPv6; update any conditionals
that currently compare lengths against MaxICMPPayloadLength to instead use the
appropriate family-specific min.
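A minimal sketch of the family-specific gate described above. The constant and helper names (`ipv4MinParseLength`, `minParseLength`) are hypothetical; the real tracker would key the check off its own packet-family field:

```go
package main

import "fmt"

const (
	// Storage maximum: enough for an IPv6 header (40 bytes) plus 8 bytes of transport.
	maxICMPPayloadLength = 48
	// Family-specific parse minimums: the smallest embedded packet we can still decode.
	ipv4MinParseLength = 28 // 20-byte IPv4 header + 8-byte transport
	ipv6MinParseLength = 48 // 40-byte IPv6 header + 8-byte transport
)

// minParseLength returns the threshold used to decide whether an ICMP
// error payload carries a decodable original packet for the given family.
func minParseLength(isV6 bool) int {
	if isV6 {
		return ipv6MinParseLength
	}
	return ipv4MinParseLength
}

func main() {
	payload := make([]byte, 28) // typical IPv4 ICMP error payload

	fmt.Println(len(payload) >= minParseLength(false)) // true: decodable as v4
	fmt.Println(len(payload) >= minParseLength(true))  // false: too short for v6
}
```

With this split, the storage constant can stay at 48 without silencing the common 28-byte IPv4 case in log rendering.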
// IsValidInbound checks if an inbound ICMP Echo Reply matches a tracked request.
// Accepts both ICMPv4 (type 0) and ICMPv6 (type 129) echo replies.
func (t *ICMPTracker) IsValidInbound(srcIP netip.Addr, dstIP netip.Addr, id uint16, icmpType uint8, size int) bool {
-	if icmpType != uint8(layers.ICMPv4TypeEchoReply) {
+	if icmpType != uint8(layers.ICMPv4TypeEchoReply) && icmpType != uint8(layers.ICMPv6TypeEchoReply) {
ICMPv6 echo replies will still miss conntrack state.
This now accepts ICMPv6 echo replies, but outbound/request tracking is still keyed off layers.ICMPv4TypeEchoRequest only. IPv6 echo requests never enter t.connections, so every type-129 reply will fail the reverse lookup and be rejected. Please make the request-tracking check family-aware in the same change.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/firewall/uspfilter/conntrack/icmp.go` around lines 284 - 287,
IsValidInbound was extended to accept ICMPv6 echo replies but the
outbound/request tracking still only keys on ICMPv4 echo requests, so ICMPv6
replies never find a matching entry in t.connections; update the
outbound/request tracking logic (the code that inserts into t.connections—e.g.,
the method that checks for layers.ICMPv4TypeEchoRequest) to also treat
layers.ICMPv6TypeEchoRequest as a request key when the packet family is IPv6,
and ensure lookup/insert uses a family-aware key (or checks both v4/v6 request
types) so that ICMPv6 requests are stored and their corresponding type-129
replies can be matched by IsValidInbound.
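The family-aware pairing the prompt asks for can be sketched as follows. The helper names are hypothetical and the type codes are the standard ICMP/ICMPv6 values (matching gopacket's `layers.ICMPv4TypeEchoRequest` etc.); the real tracker would use the library constants:

```go
package main

import "fmt"

// Standard ICMP type codes (RFC 792 and RFC 4443).
const (
	icmpv4EchoRequest uint8 = 8
	icmpv4EchoReply   uint8 = 0
	icmpv6EchoRequest uint8 = 128
	icmpv6EchoReply   uint8 = 129
)

// isTrackableEchoRequest reports whether an outbound ICMP packet should
// create conntrack state, for either address family.
func isTrackableEchoRequest(icmpType uint8) bool {
	return icmpType == icmpv4EchoRequest || icmpType == icmpv6EchoRequest
}

// isTrackableEchoReply mirrors the inbound check in IsValidInbound:
// a reply of either family may match a tracked request.
func isTrackableEchoReply(icmpType uint8) bool {
	return icmpType == icmpv4EchoReply || icmpType == icmpv6EchoReply
}

func main() {
	// An ICMPv6 echo request (type 128) must enter the tracker so that
	// its type-129 reply can find the reverse-lookup entry.
	fmt.Println(isTrackableEchoRequest(icmpv6EchoRequest)) // true
	fmt.Println(isTrackableEchoReply(icmpv6EchoReply))     // true
}
```

Unless the outbound path accepts type 128, widening only the inbound check leaves every ICMPv6 reply without a matching entry.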
if srcSelf {
	srcAddr = selfAddr(engine, peer)
}
if dstSelf {
	dstAddr = selfAddr(engine, peer)
}

return srcAddr, dstAddr, nil
}

func selfAddr(engine *internal.Engine, peer netip.Addr) netip.Addr {
	if peer.Is6() {
		return engine.GetWgV6Addr()
	}
	return engine.GetWgAddr()
}
Fail fast when "self" has no address in the selected family.
selfAddr() can return the zero netip.Addr{} when IPv6 is requested on a client that only has an IPv4 overlay, or vice versa. TracePacket then builds a packet with an invalid endpoint instead of returning a clear configuration error.
Proposed fix
if srcSelf {
- srcAddr = selfAddr(engine, peer)
+ srcAddr, err = selfAddr(engine, peer)
+ if err != nil {
+ return netip.Addr{}, netip.Addr{}, err
+ }
}
if dstSelf {
- dstAddr = selfAddr(engine, peer)
+ dstAddr, err = selfAddr(engine, peer)
+ if err != nil {
+ return netip.Addr{}, netip.Addr{}, err
+ }
}
@@
-func selfAddr(engine *internal.Engine, peer netip.Addr) netip.Addr {
- if peer.Is6() {
- return engine.GetWgV6Addr()
- }
- return engine.GetWgAddr()
+func selfAddr(engine *internal.Engine, peer netip.Addr) (netip.Addr, error) {
+ addr := engine.GetWgAddr()
+ family := "IPv4"
+ if peer.Is6() {
+ addr = engine.GetWgV6Addr()
+ family = "IPv6"
+ }
+ if !addr.IsValid() {
+ return netip.Addr{}, fmt.Errorf("local %s overlay address is not configured", family)
+ }
+ return addr, nil
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/server/trace.go` around lines 118 - 133, selfAddr may return a zero
netip.Addr for the wrong IP family; change the flow to fail fast by making
selfAddr report that (either by returning an error or by having the caller
validate the returned address with addr.IsValid()), and update the TracePacket
caller to detect a zero/invalid address and return a clear configuration error
instead of building a packet with an invalid endpoint. Specifically, ensure the
unique symbols selfAddr and the TracePacket (or the function that sets
srcAddr/dstAddr) perform a validity check on the address returned for the
requested family and propagate a descriptive error when no address exists for
that family.
func (am *DefaultAccountManager) updateSingleGroup(ctx context.Context, accountID, userID string, newGroup *types.Group) ([]func(), error) {
	var events []func()
	err := am.Store.ExecuteInTransaction(ctx, func(transaction store.Store) error {
		if err := validateNewGroup(ctx, transaction, accountID, newGroup); err != nil {
			return err
		}

		newGroup.AccountID = accountID

		if err := transaction.UpdateGroup(ctx, newGroup); err != nil {
			return err
		}

		if err := am.reconcileIPv6ForGroupChanges(ctx, transaction, accountID, []string{newGroup.ID}); err != nil {
			return err
		}

		if err := transaction.IncrementNetworkSerial(ctx, accountID); err != nil {
			return err
		}

		events = am.prepareGroupEvents(ctx, transaction, accountID, userID, newGroup)
		return nil
	})
	return events, err
Prepare group events before persisting the new state.
prepareGroupEvents() computes diffs by reloading the current group from the transaction. Here it runs after transaction.UpdateGroup(...), so bulk UpdateGroups() will stop emitting rename / peer-added / peer-removed activity events because oldGroup is already the updated one.
Suggested fix
func (am *DefaultAccountManager) updateSingleGroup(ctx context.Context, accountID, userID string, newGroup *types.Group) ([]func(), error) {
var events []func()
err := am.Store.ExecuteInTransaction(ctx, func(transaction store.Store) error {
if err := validateNewGroup(ctx, transaction, accountID, newGroup); err != nil {
return err
}
newGroup.AccountID = accountID
+
+ events = am.prepareGroupEvents(ctx, transaction, accountID, userID, newGroup)
if err := transaction.UpdateGroup(ctx, newGroup); err != nil {
return err
}
if err := am.reconcileIPv6ForGroupChanges(ctx, transaction, accountID, []string{newGroup.ID}); err != nil {
return err
}
if err := transaction.IncrementNetworkSerial(ctx, accountID); err != nil {
return err
}
-
- events = am.prepareGroupEvents(ctx, transaction, accountID, userID, newGroup)
return nil
})
return events, err
 }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func (am *DefaultAccountManager) updateSingleGroup(ctx context.Context, accountID, userID string, newGroup *types.Group) ([]func(), error) {
	var events []func()
	err := am.Store.ExecuteInTransaction(ctx, func(transaction store.Store) error {
		if err := validateNewGroup(ctx, transaction, accountID, newGroup); err != nil {
			return err
		}
		newGroup.AccountID = accountID
		events = am.prepareGroupEvents(ctx, transaction, accountID, userID, newGroup)
		if err := transaction.UpdateGroup(ctx, newGroup); err != nil {
			return err
		}
		if err := am.reconcileIPv6ForGroupChanges(ctx, transaction, accountID, []string{newGroup.ID}); err != nil {
			return err
		}
		if err := transaction.IncrementNetworkSerial(ctx, accountID); err != nil {
			return err
		}
		return nil
	})
	return events, err
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@management/server/group.go` around lines 314 - 338, The issue is that
prepareGroupEvents is called after transaction.UpdateGroup so it reloads the
already-updated group and emits no diffs; change updateSingleGroup so you call
prepareGroupEvents(ctx, transaction, accountID, userID, newGroup) and capture
events before calling transaction.UpdateGroup (and before
reconcileIPv6ForGroupChanges/IncrementNetworkSerial), ensuring validateNewGroup
still runs first and newGroup.AccountID is set prior to preparing events so
diffs are computed against the pre-update state.
var nsList []nbdns.NameServer
for _, apiNS := range apiNSList {
-	parsed, err := nbdns.ParseNameServerURL(fmt.Sprintf("%s://%s:%d", apiNS.NsType, apiNS.Ip, apiNS.Port))
+	parsed, err := nbdns.ParseNameServerURL(fmt.Sprintf("%s://%s", apiNS.NsType, net.JoinHostPort(strings.Trim(apiNS.Ip, "[]"), strconv.Itoa(apiNS.Port))))
Validate bracket pairing before trimming host delimiters.
Line 207 uses strings.Trim(apiNS.Ip, "[]"), which can silently normalize malformed inputs (e.g. 1.1.1.1], [1.1.1.1) into valid hosts. This weakens validation.
💡 Proposed fix
func toServerNSList(apiNSList []api.Nameserver) ([]nbdns.NameServer, error) {
var nsList []nbdns.NameServer
for _, apiNS := range apiNSList {
- parsed, err := nbdns.ParseNameServerURL(fmt.Sprintf("%s://%s", apiNS.NsType, net.JoinHostPort(strings.Trim(apiNS.Ip, "[]"), strconv.Itoa(apiNS.Port))))
+ host := apiNS.Ip
+ if strings.HasPrefix(host, "[") || strings.HasSuffix(host, "]") {
+ if !(strings.HasPrefix(host, "[") && strings.HasSuffix(host, "]")) {
+ return nil, fmt.Errorf("invalid nameserver IP format: %s", apiNS.Ip)
+ }
+ host = strings.TrimPrefix(strings.TrimSuffix(host, "]"), "[")
+ }
+
+ parsed, err := nbdns.ParseNameServerURL(fmt.Sprintf("%s://%s", apiNS.NsType, net.JoinHostPort(host, strconv.Itoa(apiNS.Port))))
if err != nil {
return nil, err
}
nsList = append(nsList, parsed)
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@management/server/http/handlers/dns/nameservers_handler.go` at line 207, The
current call that constructs the nameserver URL uses strings.Trim(apiNS.Ip,
"[]") which can silently accept malformed bracketed addresses; before trimming,
validate bracket pairing on apiNS.Ip (if it contains '[' or ']' ensure it starts
with '[' and ends with ']' and that both are present) and return an error for
malformed input; only then call strings.Trim and proceed to net.JoinHostPort and
nbdns.ParseNameServerURL so ParseNameServerURL never receives a silently
normalized, invalid host.
	DNSLabel: fmt.Sprintf("oldpeer-%d", i),
	Key:      peerKey.PublicKey().String(),
-	IP:       net.ParseIP(fmt.Sprintf("100.64.%d.%d", i/256, i%256)),
+	IP:       netip.MustParseAddr(fmt.Sprintf("100.64.%d.%d", i/256, i%256)),
Potential panic in benchmark data generation for large peers values.
Line 136 can panic when peers is high enough that i/256 exceeds 255 (invalid IPv4 octet), because MustParseAddr panics on malformed input. Consider enforcing bounds and constructing the address without string parsing.
Suggested fix
func PopulateTestData(b *testing.B, am account.Manager, peers, groups, users, setupKeys int) {
b.Helper()
ctx := context.Background()
acc, err := am.GetAccount(ctx, TestAccountId)
if err != nil {
b.Fatalf("Failed to get account: %v", err)
}
+ if peers > 1<<16 {
+ b.Fatalf("peers (%d) exceeds supported test IP range (65536)", peers)
+ }
+
// Create peers
for i := 0; i < peers; i++ {
peerKey, _ := wgtypes.GeneratePrivateKey()
peer := &nbpeer.Peer{
ID: fmt.Sprintf("oldpeer-%d", i),
DNSLabel: fmt.Sprintf("oldpeer-%d", i),
Key: peerKey.PublicKey().String(),
- IP: netip.MustParseAddr(fmt.Sprintf("100.64.%d.%d", i/256, i%256)),
+ IP: netip.AddrFrom4([4]byte{100, 64, byte(i / 256), byte(i % 256)}),
Status: &nbpeer.PeerStatus{LastSeen: time.Now().UTC(), Connected: true},
UserID: TestUserId,
}
acc.Peers[peer.ID] = peer
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@management/server/http/testing/testing_tools/tools.go` at line 136, The
current use of netip.MustParseAddr(fmt.Sprintf("100.64.%d.%d", i/256, i%256))
can panic for large i; replace string parsing with a safe construction using
netip.AddrFrom4 and bounded octets: compute the two lower octets as
byte((i/256)&0xFF) and byte(i&0xFF) (or use %256), then call
netip.AddrFrom4([4]byte{100, 64, octet1, octet2}) to create the IP without
parsing; update the code in the peer-generation function (the block that sets IP
in tools.go) to use this approach and remove MustParseAddr usage.
if len(settings.IPv6EnabledGroups) > 0 && network.NetV6.IP != nil {
	var allGroupID string
	if !peer.ProxyMeta.Embedded {
		allGroup, err := am.Store.GetGroupByName(ctx, store.LockingStrengthNone, accountID, "All")
		if err != nil {
			return nil, nil, nil, fmt.Errorf("get All group: %w", err)
		}
Don't block peer registration on All group lookup failure.
This turns optional IPv6 allocation into a hard registration failure. If the built-in All group lookup fails here, continue registering the peer without newPeer.IPv6 and let a later sync recover it.
Suggested fix
if len(settings.IPv6EnabledGroups) > 0 && network.NetV6.IP != nil {
var allGroupID string
if !peer.ProxyMeta.Embedded {
allGroup, err := am.Store.GetGroupByName(ctx, store.LockingStrengthNone, accountID, "All")
if err != nil {
- return nil, nil, nil, fmt.Errorf("get All group: %w", err)
+ log.WithContext(ctx).Warnf("failed to get All group for IPv6 allocation, continuing without IPv6: %v", err)
+ } else {
+ allGroupID = allGroup.ID
}
- allGroupID = allGroup.ID
}
if peerWillHaveIPv6(settings, peerAddConfig.GroupsToAdd, allGroupID) {
v6Prefix, err := netip.ParsePrefix(network.NetV6.String())
if err != nil {
 return nil, nil, nil, fmt.Errorf("parse IPv6 prefix: %w", err)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if len(settings.IPv6EnabledGroups) > 0 && network.NetV6.IP != nil {
	var allGroupID string
	if !peer.ProxyMeta.Embedded {
		allGroup, err := am.Store.GetGroupByName(ctx, store.LockingStrengthNone, accountID, "All")
		if err != nil {
			log.WithContext(ctx).Warnf("failed to get All group for IPv6 allocation, continuing without IPv6: %v", err)
		} else {
			allGroupID = allGroup.ID
		}
	}
	if peerWillHaveIPv6(settings, peerAddConfig.GroupsToAdd, allGroupID) {
		v6Prefix, err := netip.ParsePrefix(network.NetV6.String())
		if err != nil {
			return nil, nil, nil, fmt.Errorf("parse IPv6 prefix: %w", err)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@management/server/peer.go` around lines 764 - 770, The AddPeer IPv6
allocation block currently aborts registration if am.Store.GetGroupByName(ctx,
..., "All") returns an error; change this so the lookup is soft-failed: in the
block where you call am.Store.GetGroupByName (within the IPv6 allocation logic
that checks len(settings.IPv6EnabledGroups)>0 && network.NetV6.IP!=nil and
peer.ProxyMeta.Embedded), do not return on error — instead log/debug the error,
leave allGroupID empty (or nil) and continue so newPeer.IPv6 remains unset;
ensure subsequent code handles an empty allGroupID the same way it would if the
group simply didn’t exist so registration proceeds and a later sync can recover
IPv6 allocation.



Describe your changes
This PR implements IPv6 overlay support in client and management.
Issue ticket number and link
Stack
Checklist
Documentation
Select exactly one:
Docs PR URL (required if "docs added" is checked)
Paste the PR link from https://github.com/netbirdio/docs here:
netbirdio/docs#667
Summary by CodeRabbit
New Features
Improvements
Refactor
Tests