chore(deps): update prom-stack-prod for prod env #187
Open
ixxeL2097 wants to merge 1 commit into main from renovate/helm/prom-stack-prod
Conversation
--- main/kube-prometheus-stack_talos_manifests_prom-stack_prod_manifest_main.yaml 2025-03-21 01:07:05.055022820 +0000
+++ pr/kube-prometheus-stack_talos_manifests_prom-stack_prod_manifest_pr.yaml 2025-03-21 01:06:56.650994758 +0000
@@ -1,177 +1,177 @@
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: true
metadata:
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
name: kube-prometheus-stack-grafana
namespace: github-runner
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: true
metadata:
labels:
- helm.sh/chart: kube-state-metrics-5.29.0
+ helm.sh/chart: kube-state-metrics-5.30.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: kube-state-metrics
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "2.14.0"
+ app.kubernetes.io/version: "2.15.0"
release: kube-prometheus-stack
name: kube-prometheus-stack-kube-state-metrics
namespace: github-runner
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-prometheus-stack-prometheus-node-exporter
namespace: github-runner
labels:
- helm.sh/chart: prometheus-node-exporter-4.43.1
+ helm.sh/chart: prometheus-node-exporter-4.44.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: prometheus-node-exporter
app.kubernetes.io/name: prometheus-node-exporter
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "1.8.2"
+ app.kubernetes.io/version: "1.9.0"
release: kube-prometheus-stack
automountServiceAccountToken: false
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/alertmanager/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-prometheus-stack-alertmanager
namespace: github-runner
labels:
app: kube-prometheus-stack-alertmanager
app.kubernetes.io/name: kube-prometheus-stack-alertmanager
app.kubernetes.io/component: alertmanager
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
automountServiceAccountToken: true
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-prometheus-stack-operator
namespace: github-runner
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
app: kube-prometheus-stack-operator
app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
app.kubernetes.io/component: prometheus-operator
automountServiceAccountToken: true
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-prometheus-stack-prometheus
namespace: github-runner
labels:
app: kube-prometheus-stack-prometheus
app.kubernetes.io/name: kube-prometheus-stack-prometheus
app.kubernetes.io/component: prometheus
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
automountServiceAccountToken: true
---
# Source: kube-prometheus-stack/charts/prometheus-blackbox-exporter/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-prometheus-stack-prometheus-blackbox-exporter
namespace: github-runner
labels:
- helm.sh/chart: prometheus-blackbox-exporter-9.3.0
+ helm.sh/chart: prometheus-blackbox-exporter-9.4.0
app.kubernetes.io/name: prometheus-blackbox-exporter
app.kubernetes.io/instance: kube-prometheus-stack
app.kubernetes.io/version: "v0.26.0"
app.kubernetes.io/managed-by: Helm
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: kube-prometheus-stack-grafana
namespace: github-runner
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
type: Opaque
data:
admin-user: "YWRtaW4="
admin-password: "cGFzc3dvcmQ="
ldap-toml: ""
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/alertmanager/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: alertmanager-kube-prometheus-stack-alertmanager
namespace: github-runner
labels:
app: kube-prometheus-stack-alertmanager
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
data:
alertmanager.yaml: "Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0KaW5oaWJpdF9ydWxlczoKLSBlcXVhbDoKICAtIG5hbWVzcGFjZQogIC0gYWxlcnRuYW1lCiAgc291cmNlX21hdGNoZXJzOgogIC0gc2V2ZXJpdHkgPSBjcml0aWNhbAogIHRhcmdldF9tYXRjaGVyczoKICAtIHNldmVyaXR5ID1+IHdhcm5pbmd8aW5mbwotIGVxdWFsOgogIC0gbmFtZXNwYWNlCiAgLSBhbGVydG5hbWUKICBzb3VyY2VfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IHdhcm5pbmcKICB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IGluZm8KLSBlcXVhbDoKICAtIG5hbWVzcGFjZQogIHNvdXJjZV9tYXRjaGVyczoKICAtIGFsZXJ0bmFtZSA9IEluZm9JbmhpYml0b3IKICB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IGluZm8KLSB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBhbGVydG5hbWUgPSBJbmZvSW5oaWJpdG9yCnJlY2VpdmVyczoKLSBuYW1lOiAibnVsbCIKcm91dGU6CiAgZ3JvdXBfYnk6CiAgLSBuYW1lc3BhY2UKICBncm91cF9pbnRlcnZhbDogNW0KICBncm91cF93YWl0OiAzMHMKICByZWNlaXZlcjogIm51bGwiCiAgcmVwZWF0X2ludGVydmFsOiAxMmgKICByb3V0ZXM6CiAgLSBtYXRjaGVyczoKICAgIC0gYWxlcnRuYW1lID0gIldhdGNoZG9nIgogICAgcmVjZWl2ZXI6ICJudWxsIgp0ZW1wbGF0ZXM6Ci0gL2V0Yy9hbGVydG1hbmFnZXIvY29uZmlnLyoudG1wbA=="
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/configmap-dashboard-provider.yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
name: kube-prometheus-stack-grafana-config-dashboards
namespace: github-runner
data:
provider.yaml: |-
apiVersion: 1
providers:
- name: 'sidecarProvider'
orgId: 1
type: file
disableDeletion: false
@@ -181,24 +181,24 @@
foldersFromFilesStructure: true
path: /tmp/dashboards
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-prometheus-stack-grafana
namespace: github-runner
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
data:
plugins: grafana-piechart-panel,grafana-polystat-panel,grafana-clock-panel
grafana.ini: |
[analytics]
check_for_updates = true
[grafana_net]
url = https://grafana.net
[log]
mode = console
@@ -306,58 +306,58 @@
"https://raw.githubusercontent.com/dotdc/grafana-dashboards-kubernetes/master/dashboards/k8s-views-pods.json" \
> "/var/lib/grafana/dashboards/grafana-dashboards-kubernetes/k8s-views-pods.json"
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-argocd
namespace: github-runner
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
dashboard-provider: grafana-dashboards-argocd
data:
{}
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-kubernetes
namespace: github-runner
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
dashboard-provider: grafana-dashboards-kubernetes
data:
{}
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/grafana/configmaps-datasources.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-prometheus-stack-grafana-datasource
namespace: github-runner
labels:
grafana_datasource: "1"
app: kube-prometheus-stack-grafana
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
data:
datasource.yaml: |-
apiVersion: 1
datasources:
- name: "Prometheus"
type: prometheus
uid: prometheus
url: http://kube-prometheus-stack-prometheus.github-runner:9090/
@@ -375,21 +375,21 @@
handleGrafanaManagedAlerts: false
implementation: prometheus
---
# Source: kube-prometheus-stack/charts/prometheus-blackbox-exporter/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-prometheus-stack-prometheus-blackbox-exporter
namespace: github-runner
labels:
- helm.sh/chart: prometheus-blackbox-exporter-9.3.0
+ helm.sh/chart: prometheus-blackbox-exporter-9.4.0
app.kubernetes.io/name: prometheus-blackbox-exporter
app.kubernetes.io/instance: kube-prometheus-stack
app.kubernetes.io/version: "v0.26.0"
app.kubernetes.io/managed-by: Helm
data:
blackbox.yaml: |
modules:
http_2xx:
http:
follow_redirects: true
@@ -398,42 +398,42 @@
- HTTP/1.1
- HTTP/2.0
prober: http
timeout: 5s
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
name: kube-prometheus-stack-grafana-clusterrole
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["configmaps", "secrets"]
verbs: ["get", "watch", "list"]
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
- helm.sh/chart: kube-state-metrics-5.29.0
+ helm.sh/chart: kube-state-metrics-5.30.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: kube-state-metrics
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "2.14.0"
+ app.kubernetes.io/version: "2.15.0"
release: kube-prometheus-stack
name: kube-prometheus-stack-kube-state-metrics
rules:
- apiGroups: ["certificates.k8s.io"]
resources:
- certificatesigningrequests
verbs: ["list", "watch"]
- apiGroups: [""]
@@ -573,23 +573,23 @@
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kube-prometheus-stack-operator
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
app: kube-prometheus-stack-operator
app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
app.kubernetes.io/component: prometheus-operator
rules:
- apiGroups:
- monitoring.coreos.com
resources:
- alertmanagers
@@ -683,23 +683,23 @@
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kube-prometheus-stack-prometheus
labels:
app: kube-prometheus-stack-prometheus
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
rules:
# This permission are not in the kube-prometheus repo
# they're grabbed from https://github.com/prometheus/prometheus/blob/master/documentation/examples/rbac-setup.yml
- apiGroups: [""]
resources:
- nodes
- nodes/metrics
- services
@@ -717,68 +717,68 @@
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
verbs: ["get"]
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kube-prometheus-stack-grafana-clusterrolebinding
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
subjects:
- kind: ServiceAccount
name: kube-prometheus-stack-grafana
namespace: github-runner
roleRef:
kind: ClusterRole
name: kube-prometheus-stack-grafana-clusterrole
apiGroup: rbac.authorization.k8s.io
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
- helm.sh/chart: kube-state-metrics-5.29.0
+ helm.sh/chart: kube-state-metrics-5.30.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: kube-state-metrics
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "2.14.0"
+ app.kubernetes.io/version: "2.15.0"
release: kube-prometheus-stack
name: kube-prometheus-stack-kube-state-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-prometheus-stack-kube-state-metrics
subjects:
- kind: ServiceAccount
name: kube-prometheus-stack-kube-state-metrics
namespace: github-runner
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-prometheus-stack-operator
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
app: kube-prometheus-stack-operator
app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
app.kubernetes.io/component: prometheus-operator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-prometheus-stack-operator
subjects:
@@ -789,103 +789,103 @@
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-prometheus-stack-prometheus
labels:
app: kube-prometheus-stack-prometheus
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-prometheus-stack-prometheus
subjects:
- kind: ServiceAccount
name: kube-prometheus-stack-prometheus
namespace: github-runner
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: kube-prometheus-stack-grafana
namespace: github-runner
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
rules: []
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kube-prometheus-stack-grafana
namespace: github-runner
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kube-prometheus-stack-grafana
subjects:
- kind: ServiceAccount
name: kube-prometheus-stack-grafana
namespace: github-runner
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: kube-prometheus-stack-grafana
namespace: github-runner
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
spec:
type: ClusterIP
ports:
- name: http-web
port: 80
protocol: TCP
targetPort: 3000
selector:
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: kube-prometheus-stack-kube-state-metrics
namespace: github-runner
labels:
- helm.sh/chart: kube-state-metrics-5.29.0
+ helm.sh/chart: kube-state-metrics-5.30.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: kube-state-metrics
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "2.14.0"
+ app.kubernetes.io/version: "2.15.0"
release: kube-prometheus-stack
annotations:
spec:
type: "ClusterIP"
ports:
- name: "http"
protocol: TCP
port: 8080
targetPort: 8080
@@ -893,27 +893,27 @@
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/instance: kube-prometheus-stack
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: kube-prometheus-stack-prometheus-node-exporter
namespace: github-runner
labels:
- helm.sh/chart: prometheus-node-exporter-4.43.1
+ helm.sh/chart: prometheus-node-exporter-4.44.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: prometheus-node-exporter
app.kubernetes.io/name: prometheus-node-exporter
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "1.8.2"
+ app.kubernetes.io/version: "1.9.0"
release: kube-prometheus-stack
jobLabel: node-exporter
annotations:
prometheus.io/scrape: "true"
spec:
type: ClusterIP
ports:
- port: 9100
targetPort: 9100
protocol: TCP
@@ -927,23 +927,23 @@
kind: Service
metadata:
name: kube-prometheus-stack-alertmanager
namespace: github-runner
labels:
app: kube-prometheus-stack-alertmanager
self-monitor: "true"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
ports:
- name: http-web
port: 9093
targetPort: 9093
protocol: TCP
- name: reloader-web
appProtocol: http
@@ -959,23 +959,23 @@
apiVersion: v1
kind: Service
metadata:
name: kube-prometheus-stack-coredns
labels:
app: kube-prometheus-stack-coredns
jobLabel: coredns
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
namespace: kube-system
spec:
clusterIP: None
ports:
- name: http-metrics
port: 9153
protocol: TCP
targetPort: 9153
@@ -986,23 +986,23 @@
apiVersion: v1
kind: Service
metadata:
name: kube-prometheus-stack-kube-controller-manager
labels:
app: kube-prometheus-stack-kube-controller-manager
jobLabel: kube-controller-manager
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
namespace: kube-system
spec:
clusterIP: None
ports:
- name: http-metrics
port: 10257
protocol: TCP
targetPort: 10257
@@ -1014,23 +1014,23 @@
apiVersion: v1
kind: Service
metadata:
name: kube-prometheus-stack-kube-proxy
labels:
app: kube-prometheus-stack-kube-proxy
jobLabel: kube-proxy
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
namespace: kube-system
spec:
clusterIP: None
ports:
- name: http-metrics
port: 10249
protocol: TCP
targetPort: 10249
@@ -1042,23 +1042,23 @@
apiVersion: v1
kind: Service
metadata:
name: kube-prometheus-stack-kube-scheduler
labels:
app: kube-prometheus-stack-kube-scheduler
jobLabel: kube-scheduler
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
namespace: kube-system
spec:
clusterIP: None
ports:
- name: http-metrics
port: 10259
protocol: TCP
targetPort: 10259
@@ -1069,23 +1069,23 @@
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/service.yaml
apiVersion: v1
kind: Service
metadata:
name: kube-prometheus-stack-operator
namespace: github-runner
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
app: kube-prometheus-stack-operator
app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
app.kubernetes.io/component: prometheus-operator
spec:
ports:
- name: https
port: 443
targetPort: https
@@ -1099,23 +1099,23 @@
kind: Service
metadata:
name: kube-prometheus-stack-prometheus
namespace: github-runner
labels:
app: kube-prometheus-stack-prometheus
self-monitor: "true"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
ports:
- name: http-web
port: 9090
targetPort: 9090
- name: reloader-web
appProtocol: http
port: 8080
@@ -1127,21 +1127,21 @@
sessionAffinity: None
type: "ClusterIP"
---
# Source: kube-prometheus-stack/charts/prometheus-blackbox-exporter/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: kube-prometheus-stack-prometheus-blackbox-exporter
namespace: github-runner
labels:
- helm.sh/chart: prometheus-blackbox-exporter-9.3.0
+ helm.sh/chart: prometheus-blackbox-exporter-9.4.0
app.kubernetes.io/name: prometheus-blackbox-exporter
app.kubernetes.io/instance: kube-prometheus-stack
app.kubernetes.io/version: "v0.26.0"
app.kubernetes.io/managed-by: Helm
spec:
type: ClusterIP
ports:
- port: 9115
targetPort: http
protocol: TCP
@@ -1151,63 +1151,63 @@
app.kubernetes.io/name: prometheus-blackbox-exporter
app.kubernetes.io/instance: kube-prometheus-stack
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-prometheus-stack-prometheus-node-exporter
namespace: github-runner
labels:
- helm.sh/chart: prometheus-node-exporter-4.43.1
+ helm.sh/chart: prometheus-node-exporter-4.44.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: prometheus-node-exporter
app.kubernetes.io/name: prometheus-node-exporter
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "1.8.2"
+ app.kubernetes.io/version: "1.9.0"
release: kube-prometheus-stack
spec:
selector:
matchLabels:
app.kubernetes.io/name: prometheus-node-exporter
app.kubernetes.io/instance: kube-prometheus-stack
revisionHistoryLimit: 10
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
labels:
- helm.sh/chart: prometheus-node-exporter-4.43.1
+ helm.sh/chart: prometheus-node-exporter-4.44.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: prometheus-node-exporter
app.kubernetes.io/name: prometheus-node-exporter
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "1.8.2"
+ app.kubernetes.io/version: "1.9.0"
release: kube-prometheus-stack
jobLabel: node-exporter
spec:
automountServiceAccountToken: false
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-prometheus-stack-prometheus-node-exporter
containers:
- name: node-exporter
- image: quay.io/prometheus/node-exporter:v1.8.2
+ image: quay.io/prometheus/node-exporter:v1.9.0
imagePullPolicy: IfNotPresent
args:
- --path.procfs=/host/proc
- --path.sysfs=/host/sys
- --path.rootfs=/host/root
- --path.udev.data=/host/root/run/udev/data
- --web.listen-address=[$(HOST_IP)]:9100
- --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)
- --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
securityContext:
@@ -1284,50 +1284,51 @@
hostPath:
path: /
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-prometheus-stack-grafana
namespace: github-runner
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
strategy:
type: RollingUpdate
template:
metadata:
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
annotations:
checksum/config: 66aa9decfacf413aeb07dd69a61ae5c3027b9f6e8f27e212bed467d4a235d5d8
- checksum/dashboards-json-config: f3f3881e62b00bf2df5ad970735e46633e85c169a792d4f9b388904dc3a599cb
+ checksum/dashboards-json-config: 6af983f74bd9713610868906a23b4d4ad6f4a2aeeac4090ea422a4d616cac54e
checksum/sc-dashboard-provider-config: e3aca4961a8923a0814f12363c5e5e10511bb1deb6cd4e0cbe138aeee493354f
checksum/secret: 7590fe10cbd3ae3e92a60625ff270e3e7d404731e1c73aaa2df1a78dab2c7768
kubectl.kubernetes.io/default-container: grafana
spec:
serviceAccountName: kube-prometheus-stack-grafana
automountServiceAccountToken: true
+ shareProcessNamespace: false
securityContext:
fsGroup: 472
runAsGroup: 472
runAsNonRoot: true
runAsUser: 472
initContainers:
- name: download-dashboards
image: "docker.io/curlimages/curl:8.9.1"
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
@@ -1342,21 +1343,21 @@
type: RuntimeDefault
volumeMounts:
- name: config
mountPath: "/etc/grafana/download_dashboards.sh"
subPath: download_dashboards.sh
- name: storage
mountPath: "/var/lib/grafana"
enableServiceLinks: true
containers:
- name: grafana-sc-dashboard
- image: "quay.io/kiwigrid/k8s-sidecar:1.28.0"
+ image: "quay.io/kiwigrid/k8s-sidecar:1.30.0"
imagePullPolicy: IfNotPresent
env:
- name: METHOD
value: WATCH
- name: LABEL
value: "grafana_dashboard"
- name: LABEL_VALUE
value: "1"
- name: FOLDER
value: "/tmp/dashboards"
@@ -1384,21 +1385,21 @@
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
seccompProfile:
type: RuntimeDefault
volumeMounts:
- name: sc-dashboard-volume
mountPath: "/tmp/dashboards"
- name: grafana-sc-datasources
- image: "quay.io/kiwigrid/k8s-sidecar:1.28.0"
+ image: "quay.io/kiwigrid/k8s-sidecar:1.30.0"
imagePullPolicy: IfNotPresent
env:
- name: METHOD
value: WATCH
- name: LABEL
value: "grafana_datasource"
- name: LABEL_VALUE
value: "1"
- name: FOLDER
value: "/etc/grafana/provisioning/datasources"
@@ -1422,21 +1423,21 @@
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
seccompProfile:
type: RuntimeDefault
volumeMounts:
- name: sc-datasources-volume
mountPath: "/etc/grafana/provisioning/datasources"
- name: grafana
- image: "docker.io/grafana/grafana:11.5.1"
+ image: "docker.io/grafana/grafana:11.5.2"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
seccompProfile:
type: RuntimeDefault
volumeMounts:
- name: config
@@ -1528,66 +1529,66 @@
emptyDir:
{}
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-prometheus-stack-kube-state-metrics
namespace: github-runner
labels:
- helm.sh/chart: kube-state-metrics-5.29.0
+ helm.sh/chart: kube-state-metrics-5.30.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: kube-state-metrics
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "2.14.0"
+ app.kubernetes.io/version: "2.15.0"
release: kube-prometheus-stack
spec:
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/instance: kube-prometheus-stack
replicas: 1
strategy:
type: RollingUpdate
revisionHistoryLimit: 10
template:
metadata:
labels:
- helm.sh/chart: kube-state-metrics-5.29.0
+ helm.sh/chart: kube-state-metrics-5.30.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: kube-state-metrics
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "2.14.0"
+ app.kubernetes.io/version: "2.15.0"
release: kube-prometheus-stack
spec:
automountServiceAccountToken: true
hostNetwork: false
serviceAccountName: kube-prometheus-stack-kube-state-metrics
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
containers:
- name: kube-state-metrics
args:
- --port=8080
- --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
imagePullPolicy: IfNotPresent
- image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.14.0
+ image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0
ports:
- containerPort: 8080
name: "http"
livenessProbe:
failureThreshold: 3
httpGet:
httpHeaders:
path: /livez
port: 8080
scheme: HTTP
@@ -1618,60 +1619,60 @@
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-prometheus-stack-operator
namespace: github-runner
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
app: kube-prometheus-stack-operator
app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
app.kubernetes.io/component: prometheus-operator
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: kube-prometheus-stack-operator
release: "kube-prometheus-stack"
template:
metadata:
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
app: kube-prometheus-stack-operator
app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
app.kubernetes.io/component: prometheus-operator
spec:
containers:
- name: kube-prometheus-stack
- image: "quay.io/prometheus-operator/prometheus-operator:v0.80.0"
+ image: "quay.io/prometheus-operator/prometheus-operator:v0.80.1"
imagePullPolicy: "IfNotPresent"
args:
- --kubelet-service=kube-system/kube-prometheus-stack-kubelet
- --kubelet-endpoints=true
- --kubelet-endpointslice=false
- --localhost=127.0.0.1
- - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.80.0
+ - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.80.1
- --config-reloader-cpu-request=0
- --config-reloader-cpu-limit=0
- --config-reloader-memory-request=0
- --config-reloader-memory-limit=0
- --thanos-default-base-image=quay.io/thanos/thanos:v0.37.2
- --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1
- --web.enable-tls=true
- --web.cert-file=/cert/cert
- --web.key-file=/cert/key
- --web.listen-address=:10250
@@ -1730,21 +1731,21 @@
automountServiceAccountToken: true
terminationGracePeriodSeconds: 30
---
# Source: kube-prometheus-stack/charts/prometheus-blackbox-exporter/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-prometheus-stack-prometheus-blackbox-exporter
namespace: github-runner
labels:
- helm.sh/chart: prometheus-blackbox-exporter-9.3.0
+ helm.sh/chart: prometheus-blackbox-exporter-9.4.0
app.kubernetes.io/name: prometheus-blackbox-exporter
app.kubernetes.io/instance: kube-prometheus-stack
app.kubernetes.io/version: "v0.26.0"
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/name: prometheus-blackbox-exporter
@@ -1810,24 +1811,24 @@
configMap:
name: kube-prometheus-stack-prometheus-blackbox-exporter
---
# Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kube-prometheus-stack-grafana
namespace: github-runner
labels:
- helm.sh/chart: grafana-8.9.0
+ helm.sh/chart: grafana-8.10.1
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "11.5.1"
+ app.kubernetes.io/version: "11.5.2"
annotations:
cert-manager.io/cluster-issuer: "vault-issuer"
cert-manager.io/common-name: "grafana.k8s-infra.fredcorp.com"
spec:
ingressClassName: nginx
tls:
- hosts:
- grafana.k8s-infra.fredcorp.com
secretName: grafana-tls-cert
rules:
@@ -1849,23 +1850,23 @@
annotations:
cert-manager.io/cluster-issuer: vault-issuer
cert-manager.io/common-name: prometheus.k8s-infra.fredcorp.com
name: kube-prometheus-stack-prometheus
namespace: github-runner
labels:
app: kube-prometheus-stack-prometheus
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
ingressClassName: nginx
rules:
- host: "prometheus.k8s-infra.fredcorp.com"
http:
paths:
- path: /
pathType: Prefix
@@ -1879,21 +1880,21 @@
- prometheus.k8s-infra.fredcorp.com
secretName: prometheus-tls-cert
---
# Source: kube-prometheus-stack/charts/prometheus-blackbox-exporter/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kube-prometheus-stack-prometheus-blackbox-exporter
namespace: github-runner
labels:
- helm.sh/chart: prometheus-blackbox-exporter-9.3.0
+ helm.sh/chart: prometheus-blackbox-exporter-9.4.0
app.kubernetes.io/name: prometheus-blackbox-exporter
app.kubernetes.io/instance: kube-prometheus-stack
app.kubernetes.io/version: "v0.26.0"
app.kubernetes.io/managed-by: Helm
annotations:
cert-manager.io/cluster-issuer: vault-issuer
cert-manager.io/common-name: blackbox.k8s-infra.fredcorp.com
spec:
ingressClassName: nginx
tls:
@@ -1916,28 +1917,28 @@
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
name: kube-prometheus-stack-alertmanager
namespace: github-runner
labels:
app: kube-prometheus-stack-alertmanager
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
- image: "quay.io/prometheus/alertmanager:v0.28.0"
- version: v0.28.0
+ image: "quay.io/prometheus/alertmanager:v0.28.1"
+ version: v0.28.1
replicas: 1
listenLocal: false
serviceAccountName: kube-prometheus-stack-alertmanager
automountServiceAccountToken: true
externalUrl: http://kube-prometheus-stack-alertmanager.github-runner:9093
paused: false
logFormat: "logfmt"
logLevel: "info"
retention: "120h"
alertmanagerConfigSelector: {}
@@ -1967,23 +1968,23 @@
kind: MutatingWebhookConfiguration
metadata:
name: kube-prometheus-stack-admission
annotations:
labels:
app: kube-prometheus-stack-admission
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
app.kubernetes.io/component: prometheus-operator-webhook
webhooks:
- name: prometheusrulemutate.monitoring.coreos.com
failurePolicy: Ignore
rules:
- apiGroups:
- monitoring.coreos.com
@@ -2007,36 +2008,36 @@
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: kube-prometheus-stack-prometheus
namespace: github-runner
labels:
app: kube-prometheus-stack-prometheus
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
automountServiceAccountToken: true
alerting:
alertmanagers:
- namespace: github-runner
name: kube-prometheus-stack-alertmanager
port: http-web
pathPrefix: "/"
apiVersion: v2
- image: "quay.io/prometheus/prometheus:v3.1.0"
- version: v3.1.0
+ image: "quay.io/prometheus/prometheus:v3.2.1"
+ version: v3.2.1
externalUrl: "http://prometheus.k8s-infra.fredcorp.com/"
paused: false
replicas: 1
shards: 1
logLevel: info
logFormat: logfmt
listenLocal: false
enableAdminAPI: false
scrapeInterval: 30s
retention: "7d"
@@ -2094,23 +2095,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-alertmanager.rules
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: alertmanager.rules
rules:
- alert: AlertmanagerFailedReload
annotations:
description: Configuration has failed to load for {{ $labels.namespace }}/{{ $labels.pod}}.
runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedreload
@@ -2237,23 +2238,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-config-reloaders
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: config-reloaders
rules:
- alert: ConfigReloaderSidecarErrors
annotations:
description: 'Errors encountered while the {{$labels.pod}} config-reloader sidecar attempts to sync config in {{$labels.namespace}} namespace.
@@ -2269,23 +2270,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-general.rules
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: general.rules
rules:
- alert: TargetDown
annotations:
description: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service }} targets in {{ $labels.namespace }} namespace are down.'
runbook_url: https://runbooks.prometheus-operator.dev/runbooks/general/targetdown
@@ -2337,23 +2338,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-k8s.rules.container-cpu-usage-seconds-tot
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: k8s.rules.container_cpu_usage_seconds_total
rules:
- expr: |-
sum by (cluster, namespace, pod, container) (
irate(container_cpu_usage_seconds_total{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}[5m])
) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (
@@ -2365,23 +2366,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-k8s.rules.container-memory-cache
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: k8s.rules.container_memory_cache
rules:
- expr: |-
container_memory_cache{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
* on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2392,23 +2393,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-k8s.rules.container-memory-rss
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: k8s.rules.container_memory_rss
rules:
- expr: |-
container_memory_rss{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
* on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2419,23 +2420,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-k8s.rules.container-memory-swap
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: k8s.rules.container_memory_swap
rules:
- expr: |-
container_memory_swap{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
* on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2446,23 +2447,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-k8s.rules.container-memory-working-set-by
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: k8s.rules.container_memory_working_set_bytes
rules:
- expr: |-
container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
* on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2473,23 +2474,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-k8s.rules.container-resource
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: k8s.rules.container_resource
rules:
- expr: |-
kube_pod_container_resource_requests{resource="memory",job="kube-state-metrics"} * on (namespace, pod, cluster)
group_left() max by (namespace, pod, cluster) (
(kube_pod_status_phase{phase=~"Pending|Running"} == 1)
@@ -2562,23 +2563,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-k8s.rules.pod-owner
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: k8s.rules.pod_owner
rules:
- expr: |-
max by (cluster, namespace, workload, pod) (
label_replace(
label_replace(
@@ -2630,23 +2631,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-kube-apiserver-availability.rules
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- interval: 3m
name: kube-apiserver-availability.rules
rules:
- expr: avg_over_time(code_verb:apiserver_request_total:increase1h[30d]) * 24 * 30
record: code_verb:apiserver_request_total:increase30d
- expr: sum by (cluster, code) (code_verb:apiserver_request_total:increase30d{verb=~"LIST|GET"})
@@ -2760,23 +2761,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-kube-apiserver-burnrate.rules
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: kube-apiserver-burnrate.rules
rules:
- expr: |-
(
(
# too slow
@@ -3082,23 +3083,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-kube-apiserver-histogram.rules
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: kube-apiserver-histogram.rules
rules:
- expr: histogram_quantile(0.99, sum by (cluster, le, resource) (rate(apiserver_request_sli_duration_seconds_bucket{job="apiserver",verb=~"LIST|GET",subresource!~"proxy|attach|log|exec|portforward"}[5m]))) > 0
labels:
quantile: '0.99'
verb: read
@@ -3113,23 +3114,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-kube-apiserver-slos
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: kube-apiserver-slos
rules:
- alert: KubeAPIErrorBudgetBurn
annotations:
description: The API server is burning too much error budget on cluster {{ $labels.cluster }}.
runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeapierrorbudgetburn
@@ -3190,23 +3191,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-kube-prometheus-general.rules
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: kube-prometheus-general.rules
rules:
- expr: count without(instance, pod, node) (up == 1)
record: count:up1
- expr: count without(instance, pod, node) (up == 0)
record: count:up0
@@ -3215,23 +3216,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-kube-prometheus-node-recording.rules
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: kube-prometheus-node-recording.rules
rules:
- expr: sum(rate(node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!="steal"}[3m])) BY (instance)
record: instance:node_cpu:rate:sum
- expr: sum(rate(node_network_receive_bytes_total[3m])) BY (instance)
record: instance:node_network_receive_bytes:rate:sum
@@ -3248,23 +3249,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-kube-scheduler.rules
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stack-69.2.4
+ chart: kube-prometheus-stack-69.8.2
release: "kube-prometheus-stack"
heritage: "Helm"
spec:
groups:
- name: kube-scheduler.rules
rules:
- expr: histogram_quantile(0.99, sum(rate(scheduler_e2e_scheduling_duration_seconds_bucket{job="kube-scheduler"}[5m])) without(instance, pod))
labels:
quantile: '0.99'
record: cluster_quantile:scheduler_e2e_scheduling_duration_seconds:histogram_quantile
@@ -3305,23 +3306,23 @@
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-prometheus-stack-kube-state-metrics
namespace: github-runner
labels:
app: kube-prometheus-stack
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: kube-prometheus-stack
- app.kubernetes.io/version: "69.2.4"
+ app.kubernetes.io/version: "69.8.2"
app.kubernetes.io/part-of: kube-prometheus-stack
- chart: kube-prometheus-stac
[Truncated: Diff output was too large]
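In the hunks shown above, only the chart and app.kubernetes.io/version labels move from 69.2.4 to 69.8.2; the rule bodies appear only as unchanged context. For orientation, here is a minimal sketch of one rendered PrometheusRule assembled from those context lines (a trimmed reconstruction for illustration, with non-essential labels omitted):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kube-prometheus-stack-kube-prometheus-general.rules
  namespace: github-runner
  labels:
    release: kube-prometheus-stack
    chart: kube-prometheus-stack-69.8.2
spec:
  groups:
    - name: kube-prometheus-general.rules
      rules:
        # Count of scrape targets currently up / down, aggregated
        # across instances, pods and nodes.
        - expr: count without(instance, pod, node) (up == 1)
          record: count:up1
        - expr: count without(instance, pod, node) (up == 0)
          record: count:up0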
This PR contains the following updates:
kube-prometheus-stack: 69.2.4 -> 69.8.2
9.3.0 -> 9.4.0
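How the bump is consumed depends on how this repository pins the chart. A minimal sketch, assuming the prod environment wraps kube-prometheus-stack in a Helm umbrella chart (the file name, chart name and local version below are illustrative assumptions, not read from this repo; only the dependency name, repository URL and target version come from the update itself):

# Chart.yaml (hypothetical umbrella chart for the prod prom-stack)
apiVersion: v2
name: prom-stack-prod      # assumed name
version: 0.1.0             # assumed local chart version
dependencies:
  - name: kube-prometheus-stack
    repository: https://prometheus-community.github.io/helm-charts
    version: 69.8.2        # bumped from 69.2.4 by this PR

Re-rendering the chart against the new version is what produces the label-only changes visible in the manifest diff above.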
Release Notes
prometheus-community/helm-charts (kube-prometheus-stack)
kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.
v69.8.2
Compare Source
What's Changed
Full Changelog: prometheus-community/helm-charts@alertmanager-1.15.2...kube-prometheus-stack-69.8.2
v69.8.1
Compare Source
What's Changed
New Contributors
Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.8.0...kube-prometheus-stack-69.8.1
v69.8.0
Compare Source
What's Changed
Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.7.4...kube-prometheus-stack-69.8.0
v69.7.4
Compare Source
What's Changed
New Contributors
Full Changelog: prometheus-community/helm-charts@prometheus-pingdom-exporter-3.0.3...kube-prometheus-stack-69.7.4
v69.7.3
Compare Source
What's Changed
New Contributors
Full Changelog: prometheus-community/helm-charts@prometheus-adapter-4.13.0...kube-prometheus-stack-69.7.3
v69.7.2
Compare Source
What's Changed
Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.7.1...kube-prometheus-stack-69.7.2
v69.7.1
Compare Source
What's Changed
Full Changelog: prometheus-community/helm-charts@prometheus-pingdom-exporter-3.0.2...kube-prometheus-stack-69.7.1
v69.7.0
Compare Source
What's Changed
New Contributors
Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.6.1...kube-prometheus-stack-69.7.0
v69.6.1
Compare Source
What's Changed
Full Changelog: prometheus-community/helm-charts@prometheus-operator-admission-webhook-0.20.0...kube-prometheus-stack-69.6.1
v69.6.0
Compare Source
What's Changed
Full Changelog: prometheus-community/helm-charts@prometheus-snmp-exporter-7.0.1...kube-prometheus-stack-69.6.0
v69.5.2
Compare Source
What's Changed
Full Changelog: prometheus-community/helm-charts@prom-label-proxy-0.10.2...kube-prometheus-stack-69.5.2
v69.5.1
Compare Source
What's Changed
Full Changelog: prometheus-community/helm-charts@prometheus-operator-crds-18.0.1...kube-prometheus-stack-69.5.1
v69.5.0
Compare Source
What's Changed
Full Changelog: prometheus-community/helm-charts@prometheus-operator-admission-webhook-0.19.0...kube-prometheus-stack-69.5.0
v69.4.1
Compare Source
What's Changed
New Contributors
Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.4.0...kube-prometheus-stack-69.4.1
v69.4.0
Compare Source
What's Changed
New Contributors
Full Changelog: prometheus-community/helm-charts@prometheus-rabbitmq-exporter-2.1.1...kube-prometheus-stack-69.4.0
v69.3.3
Compare Source
What's Changed
tpl support for additional secret names by @richardtief in https://github.com/prometheus-community/helm-charts/pull/5339
Full Changelog: prometheus-community/helm-charts@prometheus-rabbitmq-exporter-2.1.0...kube-prometheus-stack-69.3.3
v69.3.2
Compare Source
What's Changed
Full Changelog: prometheus-community/helm-charts@prometheus-elasticsearch-exporter-6.6.1...kube-prometheus-stack-69.3.2
v69.3.1
Compare Source
What's Changed
New Contributors
Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.3.0...kube-prometheus-stack-69.3.1
v69.3.0
Compare Source
What's Changed
New Contributors
Full Changelog: prometheus-community/helm-charts@prometheus-json-exporter-0.16.0...kube-prometheus-stack-69.3.0
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR has been generated by Renovate Bot.