[Helm] Chart Component Configuration Isolation#2472
Conversation
Hi @swuferhong, can you please review the pull request? Thanks!
Force-pushed from d62e30e to 92c4ce9
affo left a comment:
Hello and thanks for the contribution!
This is extremely valuable!
Is there a specific reason for moving some keys out of tablet and coordinator?
There are already tablet and coordinator sections, and now we find ourselves with tabletServer and coordinatorServer as well.
We should opt for a specific design here:
either we adopt a key-component approach or a component-key approach.
Where key-component would be, for example:
```yaml
resources:
  tablet:
  coordinator:
```

and component-key would be:

```yaml
tablet:
  resources:
coordinator:
  resources:
```

It seems here you are opting for component-key, and it sounds solid to me 🤝
However, I would leave replicas under tablet and coordinator, and move all the additions you made to the respective areas.
I think a nice follow up PR would be to also move resources to the same pattern 🤝
I am trying to push new contribution guidelines for Helm (see #2846), so this PR should add minimal Helm tests under tests/. It should also update the official docs at https://github.com/apache/fluss/blob/main/website/docs/install-deploy/deploying-with-helm.md — since this PR introduces new configuration values, those should be documented 🤝
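For reference, a minimal Helm test for the new values might look like the sketch below. This assumes the helm-unittest plugin and a coordinator template at templates/coordinator-deployment.yaml — both the plugin choice and the file paths are assumptions, not taken from the PR:

```yaml
# tests/coordinator_deployment_test.yaml (hypothetical path)
suite: coordinator deployment
templates:
  - coordinator-deployment.yaml
tests:
  - it: renders the configured replica count
    set:
      coordinatorServer:
        replicas: 2
    asserts:
      - equal:
          path: spec.replicas
          value: 2
```

A matching test for the tablet server template would exercise `tabletServer.replicas` the same way.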
@hemanthsavasere just for the sake of pickiness:
This was already addressed with #2834 :)
This reverts commit d956e75. (apache#2863)
Purpose
Linked issue: close #2191
This PR introduces component-specific configuration sections for the Fluss Helm chart, enabling users to independently configure replicas and health probes for coordinator and tablet servers.
Previously, the coordinator deployment referenced .Values.resources.tabletServer instead of .Values.resources.coordinatorServer.
Changes
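The fix amounts to pointing the coordinator's resources block at the coordinator key. A sketch of the corrected template, assuming the usual Helm `toYaml`/`nindent` pattern (the template path and indentation width are assumptions, not taken from the chart):

```yaml
# templates/coordinator-deployment.yaml (hypothetical excerpt)
resources:
  {{- toYaml .Values.resources.coordinatorServer | nindent 12 }}
```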
Added two new top-level configuration sections, `coordinatorServer` and `tabletServer`. Also updated the `resources` section to use explicit empty defaults instead of being commented out.

Coordinator:
- Replaced the hard-coded `replicas: 1` with `{{ .Values.coordinatorServer.replicas }}`
- Liveness probe settings now come from `.Values.coordinatorServer.livenessProbe.*` references
- Readiness probe settings now come from `.Values.coordinatorServer.readinessProbe.*` references
- Fixed the resources reference from `.Values.resources.tabletServer` to `.Values.resources.coordinatorServer`

Tablet server:
- Replaced the hard-coded `replicas: 3` with `{{ .Values.tabletServer.replicas }}`
- Liveness probe settings now come from `.Values.tabletServer.livenessProbe.*` references
- Readiness probe settings now come from `.Values.tabletServer.readinessProbe.*` references

Tests
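A sketch of what the new values.yaml sections could look like. The probe fields mirror standard Kubernetes probe settings, and the specific probe values shown are illustrative rather than the chart's actual defaults; the replica counts of 1 and 3 are the defaults the PR carries over:

```yaml
# values.yaml (sketch; probe values are illustrative)
coordinatorServer:
  replicas: 1
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 5

tabletServer:
  replicas: 3
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 5

# explicit empty defaults instead of commented-out blocks
resources:
  coordinatorServer: {}
  tabletServer: {}
```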
The following tests passed:
- Per-component replica counts: deployed with different replica counts for the coordinator and tablet servers.
- Per-component probe settings: verified that each component has an independent probe configuration.
- Resource bug fix: verified that the coordinator now correctly uses `resources.coordinatorServer`.
- Pod health and startup: all components started successfully, registered with ZooKeeper, and the coordinator detected both tablet servers.
- Replica scaling: verified that the tablet servers scale independently.
- Independent resource modification: verified that each component can be updated with different resources independently.
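As an illustration, the per-component scenarios above could be exercised with an override file like the one below (the file name and all values are hypothetical), applied with `helm upgrade --install -f custom-values.yaml`:

```yaml
# custom-values.yaml (hypothetical test overrides)
coordinatorServer:
  replicas: 1
tabletServer:
  replicas: 5
  readinessProbe:
    initialDelaySeconds: 20
resources:
  coordinatorServer:
    limits:
      memory: 2Gi
  tabletServer:
    limits:
      memory: 4Gi
```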
Documentation