The current `tunnel` executor works, but it leans on a few fragile Nx behaviors and would be more reliable if it were structured differently.
## Concrete bug this causes
A nested Forge app build can resolve the wrong `outputPath` when run under `tunnel`. A concrete example from a fictional app:
- the Forge app webpack build is supposed to emit runtime code to `dist/apps/example-forge-app/src/index.js`
- a clean `package` run does exactly that
- but when the same build runs under `example-forge-app:tunnel`, it also emits `dist/apps/example-forge-app/index.js` at the app root
- during that same tunnel flow, the `src/index.js` file can stop being the file that gets updated
That is not a Forge requirement issue. It appears to come from leaked Nx task context during nested execution.
The likely mechanism is:
- Nx sets `NX_TASK_TARGET_*` env vars for the current task
- `nx:run-commands` passes the parent `process.env` through to child commands
- `NxAppWebpackPlugin` reads those env vars to decide which target config to merge
- when `build` is launched from inside `tunnel`, the webpack plugin can see the outer `tunnel` target context instead of the inner `build` target context
- that makes it pick up `tunnel.options.outputPath` instead of the webpack build output path

So this can cause the runtime bundle to be written to the app root by mistake, even before the `forge tunnel` subprocess itself becomes relevant.
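The mechanism above can be illustrated with a much-simplified sketch. This is not the real `NxAppWebpackPlugin` logic (which is considerably more involved); the target names and paths below are illustrative stand-ins for the fictional app:

```typescript
// Hedged sketch: a stand-in for how a build plugin could resolve its target
// options from Nx task env vars. Paths and target shapes are illustrative.
type TargetOptions = { outputPath?: string };
type ProjectTargets = Record<string, { options?: TargetOptions }>;

function resolveOutputPath(
  targets: ProjectTargets,
  env: Record<string, string | undefined>
): string | undefined {
  // The plugin trusts the env var to name the currently running target.
  const targetName = env['NX_TASK_TARGET_TARGET'] ?? 'build';
  return targets[targetName]?.options?.outputPath;
}

const targets: ProjectTargets = {
  build: { options: { outputPath: 'dist/apps/example-forge-app/src' } },
  tunnel: { options: { outputPath: 'dist/apps/example-forge-app' } },
};

// Clean invocation: the build sees its own task context.
console.log(resolveOutputPath(targets, { NX_TASK_TARGET_TARGET: 'build' }));
// -> dist/apps/example-forge-app/src

// Nested invocation: the outer tunnel context leaks through process.env,
// so the bundle lands one level too high (index.js at the app root).
console.log(resolveOutputPath(targets, { NX_TASK_TARGET_TARGET: 'tunnel' }));
// -> dist/apps/example-forge-app
```

The point is that the plugin's behavior is entirely a function of inherited env state, which is exactly what nested execution corrupts.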
## What feels brittle

The main issue is that `tunnel` nests several long-running targets within a single executor process in `executor.ts`. That creates several problems:

- shared `process.env` gets reused across nested targets, so Nx/webpack task metadata can leak from one child into another
- inferred `nx:run-commands` targets are especially fragile, because they mostly inherit the parent environment rather than getting a clean executor context
- this does not just affect the final Forge CLI step; it already affects the nested Nx build/watch phase
- the spawned `forge tunnel` process inherits Nx variables too in `async-commands.ts`, which is another way unrelated Nx behavior can bleed into the Forge phase
- readiness is based on file presence plus open ports, which works, but it is still coordinating several concurrent systems by side effect rather than by explicit contract
## What we could improve
Redesign it around process isolation and explicit orchestration:
- Start each Custom UI as a real child process with a clean env, not a nested `runExecutor(...)` sharing the parent process.
- Start the Forge app build/watch in its own clean subprocess too.
- Whitelist the env passed to child Nx processes instead of inheriting everything.
- Strip Nx task vars before spawning nested build/watch steps: at minimum `NX_BUILD_TARGET`, `NX_TASK_TARGET_*`, `WEBPACK_SERVE`, and similar executor-specific state.
- Strip the same vars before spawning `forge tunnel`.
- Treat inferred `nx:run-commands` as a compatibility path, not the ideal path.
- Prefer explicit executors for Custom UI apps, or at least detect inferred webpack targets and normalize them more defensively.
- Separate “prepare/package” from “serve/watch” orchestration so the control flow is easier to reason about and failures are easier to attribute.
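The whitelist/strip steps could be a small pure helper. A minimal sketch, with the caveat that the exact set of variables to strip is an assumption that would need verifying against the Nx version in use:

```typescript
// Hedged sketch of an env scrubber for nested child processes. The stripped
// names mirror the ones listed above; treat the exact set as an assumption.
const STRIP_EXACT = new Set(['NX_BUILD_TARGET', 'WEBPACK_SERVE']);
const STRIP_PREFIXES = ['NX_TASK_TARGET_'];

function scrubNxTaskEnv(
  env: Record<string, string | undefined>
): Record<string, string> {
  const clean: Record<string, string> = {};
  for (const [key, value] of Object.entries(env)) {
    if (value === undefined) continue;
    if (STRIP_EXACT.has(key)) continue;
    if (STRIP_PREFIXES.some((prefix) => key.startsWith(prefix))) continue;
    clean[key] = value;
  }
  return clean;
}

// The leaked task context disappears; unrelated vars survive.
console.log(
  scrubNxTaskEnv({
    PATH: '/usr/bin',
    NX_TASK_TARGET_PROJECT: 'example-forge-app',
    NX_TASK_TARGET_TARGET: 'tunnel',
    NX_BUILD_TARGET: 'example-forge-app:tunnel',
    WEBPACK_SERVE: 'true',
  })
); // -> { PATH: '/usr/bin' }
```

Because the helper is pure, it can be unit-tested without spawning anything, which is part of the appeal over implicit inheritance.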
## What a better model looks like

Instead of “one executor starts everything and all children inherit the same world”, `tunnel` could act as an orchestrator that does this:
1. Resolve manifest resources.
2. Start each UI server in its own clean subprocess.
3. Start the Forge app build/watch in its own clean subprocess.
4. Run `package` once.
5. Wait for explicit readiness.
6. Spawn `forge tunnel` with a scrubbed env.
## Similar patterns in the Nx codebase

There are a few Nx patterns that feel like good models for a better `tunnel` executor.

### Best fits
The closest match is the module-federation family:

- it forks a fresh `nx` process and explicitly overrides env, including `WEBPACK_SERVE: 'false'`

That feels very relevant here, because the tunnel problems are caused by leaked task state like `NX_TASK_TARGET_*`, `NX_BUILD_TARGET`, and `WEBPACK_SERVE`.
So if `nx-forge:tunnel` needs to:

- start inferred `serve` targets
- start the Forge CLI
- run packaging/build steps concurrently

then borrowing this pattern would help a lot:

- spawn isolated child processes for the risky parts
- scrub/whitelist env for each child
- do not let one nested target mutate shared executor state for the whole process
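As a minimal sketch of that fork-and-override pattern, using `node -e` as a stand-in for the real nested `nx`/`forge` invocation (everything here is illustrative, not the module-federation code itself):

```typescript
import { spawnSync } from 'node:child_process';

// Spawn the risky step as a fresh process with an explicit env, instead of
// letting it inherit process.env wholesale. `node -e` stands in for the
// actual nested command here.
const result = spawnSync(
  process.execPath,
  ['-e', 'console.log(process.env.WEBPACK_SERVE, process.env.NX_TASK_TARGET_TARGET)'],
  {
    env: {
      PATH: process.env.PATH ?? '',
      // Explicit override, in the spirit of the module-federation executors:
      WEBPACK_SERVE: 'false',
      // NX_TASK_TARGET_* is deliberately not passed, so the child cannot
      // observe the parent's task context.
    },
    encoding: 'utf8',
  }
);

console.log(result.stdout.trim()); // -> false undefined
```

The child sees only what the parent chose to pass, which is the whole point: no nested target can accidentally observe or mutate the outer task's state.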
### What I’d copy
From Nx, we could borrow these concrete ideas:
- Use a small orchestration helper like `start-remote-iterators.ts` for “discover related apps and start them”.
- Keep `combineAsyncIterables(...)` plus explicit readiness checks, like `ssr-dev-server.impl.ts`.
- For subprocesses that should not inherit Nx executor state, use a fork/spawn pattern like `build-static-remotes.ts`.
- Reuse Nx’s process lifecycle ideas from `running-tasks.ts`: signal forwarding, grouped shutdown, and consistent child cleanup.
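For orientation, a minimal analogue of the `combineAsyncIterables(...)` idea looks roughly like the following. This is a simplified sketch, not the actual Nx implementation, and the readiness sources are fake:

```typescript
// Simplified sketch: merge several long-running async iterables into one
// stream, yielding values as whichever source produces them first.
async function* combine<T>(...sources: AsyncIterable<T>[]): AsyncGenerator<T> {
  const iterators = sources.map((s) => s[Symbol.asyncIterator]());
  type Tagged = { index: number; result: IteratorResult<T> };
  const pending = new Map<number, Promise<Tagged>>();
  iterators.forEach((it, index) => {
    pending.set(index, it.next().then((result) => ({ index, result })));
  });
  while (pending.size > 0) {
    const { index, result } = await Promise.race(pending.values());
    if (result.done) {
      pending.delete(index); // this source is exhausted
    } else {
      // Re-arm this source's next() and emit the value we just got.
      pending.set(
        index,
        iterators[index].next().then((r) => ({ index, result: r }))
      );
      yield result.value;
    }
  }
}

// Illustrative sources: pretend readiness events from two watched processes.
async function* readyAfter(name: string, ms: number): AsyncGenerator<string> {
  await new Promise((resolve) => setTimeout(resolve, ms));
  yield `${name} ready`;
}

(async () => {
  const merged = combine(readyAfter('custom-ui', 30), readyAfter('forge build', 10));
  for await (const event of merged) {
    console.log(event); // 'forge build ready' first, then 'custom-ui ready'
  }
})();
```

The executor's main loop then consumes one merged stream instead of juggling per-process callbacks, which makes "wait for explicit readiness" a single `for await` rather than scattered side effects.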
## How I’d apply it to `nx-forge:tunnel`
We could keep the high-level flow, but change the boundaries:
1. Discover Custom UI projects from the manifest.
2. Start each Custom UI in an isolated child process, not a shared nested executor context.
3. Start the Forge app build/watch in its own isolated child process.
4. Run `package` once.
5. Wait for ports/files explicitly.
6. Spawn `forge tunnel` with a scrubbed env.
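Those boundaries could even be made explicit as data before any process starts, so the control flow is inspectable and a failure is attributable to a named step. A hedged sketch; step shapes, commands, and project names are all illustrative, and in a real executor each `spawn` step would start a child process with a scrubbed env rather than a nested `runExecutor(...)` call:

```typescript
// Model the orchestration as an explicit, ordered plan (illustrative only).
type Step =
  | { kind: 'resolve-manifest' }
  | { kind: 'spawn'; command: string; cleanEnv: true }
  | { kind: 'run-once'; command: string }
  | { kind: 'await-readiness'; of: string[] };

function buildTunnelPlan(customUiProjects: string[]): Step[] {
  return [
    { kind: 'resolve-manifest' },
    ...customUiProjects.map(
      (p): Step => ({ kind: 'spawn', command: `nx serve ${p}`, cleanEnv: true })
    ),
    { kind: 'spawn', command: 'nx build <forge-app> --watch', cleanEnv: true },
    { kind: 'run-once', command: 'nx package <forge-app>' },
    { kind: 'await-readiness', of: customUiProjects },
    { kind: 'spawn', command: 'forge tunnel', cleanEnv: true },
  ];
}

console.log(
  buildTunnelPlan(['ui-a', 'ui-b'])
    .map((s) => s.kind)
    .join(' -> ')
);
// -> resolve-manifest -> spawn -> spawn -> spawn -> run-once -> await-readiness -> spawn
```

A plan like this is also trivially unit-testable, unlike the current implicit ordering inside a single executor process.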
One important prioritization point: the Forge subprocess env cleanup is still a good idea, but the nested Nx build isolation is the critical fix for the concrete output-path bug above. The wrong bundle location is already happening during the nested build phase, before the `forge tunnel` subprocess itself matters.