Helm chart kube-rbac-proxy issues #425

@rkevin-arch

Description

This is kind of a repeat of #404, which was closed because the author didn't have time to make a PR. I tried making one, but it turned out to be more complicated than I expected and I don't currently have enough time either, so I'm listing the observations I've made that need addressing.

#335 made kube-rbac-proxy optional and binds the metrics port to localhost only when kube-rbac-proxy is enabled. Prometheus ServiceMonitor metrics scraping is broken after that change, although I suspect it never worked as intended to begin with (it scraped successfully, but only because the port was unprotected despite having kube-rbac-proxy in front of it).
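For context, the conditional binding from #335 presumably looks something like the following in the deployment template. This is only a sketch: the value key `kubeRbacProxy.enabled` and the `--metrics-bind-address` flag name are assumptions, not the chart's actual identifiers.

```yaml
# Sketch (assumed names): with kube-rbac-proxy enabled, the manager's
# metrics endpoint binds to localhost so only the proxy sidecar can
# reach it; a ServiceMonitor scraping port 8080 directly therefore
# gets connection refused.
containers:
  - name: manager
    args:
      {{- if .Values.kubeRbacProxy.enabled }}
      - --metrics-bind-address=127.0.0.1:8080
      {{- else }}
      - --metrics-bind-address=:8080
      {{- end }}
```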

  • Is port 8080 used only for metrics, or for other things too? As far as I can tell there are no admission webhooks defined, so the port may not be used for anything else, but it would be nice to confirm.
  • If kube-rbac-proxy is enabled, port 8080 should not be exposed at all: the port definition should be removed from both the pod and the svc (and vice versa: if kube-rbac-proxy is disabled, port 8443 should be removed from both).
    • The port numbers are specified in values.yaml as service.port and service.metricsPort, but the helm chart also hardcodes 8080/8443 directly in a couple of places.
    • Should we have allowed disabling kube-rbac-proxy in the first place? Allowing this choice means the client needs to talk to a different port, switch between HTTP and HTTPS, and supply or omit auth credentials. That sounds like something we should enforce one way if possible.
  • What does kube-rbac-proxy actually do here anyway? This helm chart just runs it with no config file (I didn't even realize that was possible). I think this means it only verifies that whoever talks to it presents a valid ServiceAccount token in the cluster, but then performs no actual RBAC checks at all. Is this intended?
  • Making the ServiceMonitor scrape metrics through kube-rbac-proxy is annoying because there is no ServiceAccount with an attached token Secret to reference in the ServiceMonitor's spec.endpoints.authorization.credentials. The helm chart should likely provide a ServiceAccount, a Role/RoleBinding that kube-rbac-proxy trusts to let through, a Secret of type kubernetes.io/service-account-token, and configure the ServiceMonitor to use that Secret for auth. #404 likely created the Secret / ServiceAccount outside of helm.
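The port-gating described in the second bullet could be sketched roughly like this in the pod template. Hedged: the `kubeRbacProxy.enabled` toggle and the port names are hypothetical, and ideally the numbers would come from `service.metricsPort` / values.yaml rather than being hardcoded as they are here for brevity.

```yaml
# Sketch (assumed names): expose exactly one metrics port depending
# on the toggle, instead of always exposing 8080.
ports:
  {{- if .Values.kubeRbacProxy.enabled }}
  - name: https-metrics   # served by the kube-rbac-proxy sidecar
    containerPort: 8443
    protocol: TCP
  {{- else }}
  - name: http-metrics    # served directly by the manager
    containerPort: 8080
    protocol: TCP
  {{- end }}
```

The Service would need the same conditional so that its port list mirrors the pod's.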

If someone can clarify some of these questions (such as whether the port is used only for metrics, and what the intent of using kube-rbac-proxy is), I can try making a PR to address this, though likely not anytime soon.
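For the last bullet, the resources the chart would need to ship might look roughly like the following. This is a sketch under assumed names: the `metrics-reader` identifiers, namespace, labels, and the nonResourceURLs rule are guesses at what kube-rbac-proxy's default authorization would accept, not something verified against this chart.

```yaml
# Sketch (assumed names throughout): a ServiceAccount with an explicit
# token Secret, RBAC allowing it to GET /metrics, and a ServiceMonitor
# that authenticates with that token.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-reader
---
# A token Secret explicitly bound to the ServiceAccount, so the
# ServiceMonitor has a stable credential to reference.
apiVersion: v1
kind: Secret
metadata:
  name: metrics-reader-token
  annotations:
    kubernetes.io/service-account.name: metrics-reader
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-reader
subjects:
  - kind: ServiceAccount
    name: metrics-reader
    namespace: default
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  endpoints:
    - port: https-metrics
      scheme: https
      tlsConfig:
        insecureSkipVerify: true   # or ship and reference the proxy's CA
      authorization:
        credentials:
          name: metrics-reader-token
          key: token
```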
