Agent version:
7.78.0
Bug report
When the agent runs in an environment where no workloadmeta collector is applicable — e.g. a container sidecar that is not on Kubernetes, has no Docker/containerd/CRI-O socket, is not on ECS, and has no GPU — autodiscovery logs a spurious ERROR on every startup:
CORE | ERROR | (comp/core/autodiscovery/autodiscoveryimpl/autoconfig.go:176 in func1) | Workloadmeta collectors are not ready after 18 retries: workloadmeta not initialized, starting check scheduler controller anyway.
The same Datadog Agent container did not emit this ERROR log when the 7.76.1 base image was used.
The following environment variables are unset for the Datadog Agent process:
- DD_KUBERNETES_KUBELET_HOST
- KUBERNETES_SERVICE_HOST
- KUBERNETES_SERVICE_PORT
- KUBERNETES
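Unsetting these variables in the container entrypoint can be sketched as follows (the final exec target is an assumption, not the image's actual entrypoint; substitute the real one for your base image):

```shell
#!/bin/sh
# Sketch of an entrypoint wrapper that strips the Kubernetes-related
# variables listed above before starting the agent, so that no
# workloadmeta feature is detected from the environment.
unset DD_KUBERNETES_KUBELET_HOST \
      KUBERNETES_SERVICE_HOST \
      KUBERNETES_SERVICE_PORT \
      KUBERNETES

# Then hand off to the image's real entrypoint, e.g.:
# exec /bin/entrypoint.sh "$@"
```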
The message is cosmetic — the agent continues — but:
- It's alarming to users and flagged by log-based alerts.
- It fires on every pod restart, making it hard to spot genuine workloadmeta issues amid the noise.
- There is no documented configuration to silence it short of filtering the agent's stdout.
Reproduction
Any container where no workloadmeta feature is detected
Also reproduced in production in a GKE cluster as a sidecar after explicitly unsetting KUBERNETES_SERVICE_PORT, KUBERNETES_SERVICE_HOST, KUBERNETES, and DD_KUBERNETES_KUBELET_HOST in the container entrypoint. Confirmed via /proc/<pid>/environ that the agent process sees none of those variables.
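The /proc check mentioned above can be done with a one-liner like the following (AGENT_PID is a placeholder for the agent process's actual PID, e.g. obtained with `pidof agent`):

```shell
# /proc/<pid>/environ is NUL-separated; convert to one variable per line,
# then look for any of the Kubernetes-related variables from this report.
tr '\0' '\n' < "/proc/${AGENT_PID:-self}/environ" \
  | grep -E '^(DD_KUBERNETES_KUBELET_HOST|KUBERNETES(_SERVICE_(HOST|PORT))?)=' \
  || echo "no Kubernetes env vars present"
```

With no matches, grep exits non-zero and the fallback message is printed, confirming the agent sees none of the variables.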
Expected behavior
No startup error when no workloadmeta collector is applicable to the environment. The agent should proceed silently in that case, the same way it does when workloadmeta has genuine data to report.