What steps did you take and what happened:

I have enabled annotations for Datadog agent automatic discovery for the main Velero pod only.

This results in the node-agent pods also receiving the same annotations, which must be a bug, because node-agent pods have their own separate option, nodeAgentPodMonitor, with its own annotations.

This causes Datadog agent errors in the openmetrics configuration: in the annotation key ad.datadoghq.com/velero.checks, the string velero is valid only for the main Velero pod. For the node-agents it should be node-agent, since it must match the container identifier.

The Datadog agent shows an openmetrics error:
=============
Autodiscovery
=============
Enabled Features
================
containerd
cri
docker
kube_orchestratorexplorer
kubernetes
Configuration Errors
====================
velero/node-agent-4mz97 (0f7ecb9a-b7b2-4439-a6bd-d1555507b2e4)
--------------------------------------------------------------
annotation ad.datadoghq.com/velero.checks is invalid: velero doesn't match a container identifier [node-agent]
But the Velero Helm chart also supports specifying annotations for the node-agents separately. Maybe this will help? I added the following under metrics in values.yaml:
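Something along these lines (a sketch only: the check payload and the metrics port 8085 are illustrative assumptions, not my exact config; the point is that the annotation prefix is node-agent, matching the container identifier):

```yaml
metrics:
  # annotations for the main Velero pod stay under metrics.podAnnotations
  nodeAgentPodMonitor:
    enabled: true
    annotations:
      # prefix must be the container identifier "node-agent", not "velero"
      ad.datadoghq.com/node-agent.checks: |
        {
          "openmetrics": {
            "instances": [
              {
                "openmetrics_endpoint": "http://%%host%%:8085/metrics",
                "namespace": "velero",
                "metrics": [".*"]
              }
            ]
          }
        }
```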
After running

helm upgrade velero vmware-tanzu/velero --namespace velero --values values.yaml

all custom annotations are now gone from the node-agents.

Let's experiment more. What happens if metrics.nodeAgentPodMonitor.enabled is set to false? The main Velero pod annotations appear again on the node-agent pods!
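To see which annotations actually land on the node-agent pods after each change, a quick check like this works (the name=node-agent label selector is an assumption; verify the actual labels on your pods first):

```shell
# Print the name and annotations of every node-agent pod in the velero namespace.
# The label selector is assumed; confirm with: kubectl get pods -n velero --show-labels
kubectl get pods -n velero -l name=node-agent \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{.metadata.annotations}{"\n\n"}{end}'
```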
What did you expect to happen:
metrics.podAnnotations should apply only to the main Velero pod.
metrics.nodeAgentPodMonitor.annotations should be applied to the node-agent pods.
Setting metrics.nodeAgentPodMonitor.enabled to false should neither enable node-agent Prometheus metrics nor write wrong annotations!

Environment:
helm version (use helm version): version.BuildInfo{Version:"v3.15.4", GitCommit:"fa9efb07d9d8debbb4306d72af76a383895aa8c4", GitTreeState:"clean", GoVersion:"go1.22.6"}

helm chart version and app version (use helm list -n <YOUR NAMESPACE>):
NAME    NAMESPACE  REVISION  UPDATED                                  STATUS    CHART         APP VERSION
velero  velero     34        2025-01-28 16:57:34.542298964 +0200 EET  deployed  velero-8.3.0  1.15.2

Kubernetes version (use kubectl version):
Client Version: v1.30.9
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.5