I am retrieving statistics from a Ceph cluster running in a Kubernetes deployment, using this Metricbeat values file:
```yaml
metricbeat:
  extraEnvs:
    - name: CEPH_API_USERNAME
      value: monitoring-ceph
    - name: CEPH_API_PASSWORD
      valueFrom:
        secretKeyRef:
          name: ceph-api-user
          key: monitoring-ceph
  config:
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          templates:
            - condition.equals:
                kubernetes.labels.rook_cluster: "rook-ceph"
              config:
                - module: ceph
                  metricsets:
                    - mgr_cluster_disk
                    - mgr_osd_perf
                    - mgr_pool_disk
                    - mgr_osd_pool_stats
                    - mgr_osd_tree
                  period: 10s
                  hosts: ["https://${data.host}:8003"]
                  username: '${CEPH_API_USERNAME}'
                  password: '${CEPH_API_PASSWORD}'
                  ssl.verification_mode: "none"
                - module: prometheus
                  period: 10s
                  hosts: ["${data.host}:9283"]
                  metrics_path: /metrics
                  metrics_filters:
                    include: ["ceph_osd_stat_byte*"]
```
This works as expected. Now I want to add a second Ceph cluster, so I apply a second values file:
```yaml
metricbeat:
  extraEnvs:
    - name: CEPH_API_USERNAME
      value: monitoring-ceph
    - name: CEPH_API_PASSWORD
      valueFrom:
        secretKeyRef:
          name: ceph-rdg-api-user
          key: monitoring-ceph
  config:
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          templates:
            - condition.equals:
                kubernetes.labels.rook_cluster: "rdg-rook-ceph"
              config:
                - module: ceph
                  metricsets:
                    - mgr_cluster_disk
                    - mgr_osd_perf
                    - mgr_pool_disk
                    - mgr_osd_pool_stats
                    - mgr_osd_tree
                  period: 10s
                  hosts: ["https://${data.host}:8003"]
                  username: '${CEPH_API_USERNAME}'
                  password: '${CEPH_API_PASSWORD}'
                  ssl.verification_mode: "none"
                - module: prometheus
                  period: 10s
                  hosts: ["${data.host}:9283"]
                  metrics_path: /metrics
                  metrics_filters:
                    include: ["ceph_osd_stat_byte*"]
```
Now it appears that only the last values file applied is working. Is this the correct way to use autodiscover? The only differences between the two files are the `kubernetes.labels.rook_cluster` condition, which distinguishes the two clusters, and the secret referenced for `CEPH_API_PASSWORD`.
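In case it helps, here is my guess at what a single merged values file would look like instead: both templates under one autodiscover provider, and a separate password variable per cluster so the two secrets don't overwrite each other. The `CEPH_RDG_API_PASSWORD` name is something I made up for this sketch; is this the intended approach?

```yaml
metricbeat:
  extraEnvs:
    - name: CEPH_API_USERNAME
      value: monitoring-ceph
    - name: CEPH_API_PASSWORD
      valueFrom:
        secretKeyRef:
          name: ceph-api-user
          key: monitoring-ceph
    # distinct env var name per cluster (CEPH_RDG_API_PASSWORD is my invention)
    - name: CEPH_RDG_API_PASSWORD
      valueFrom:
        secretKeyRef:
          name: ceph-rdg-api-user
          key: monitoring-ceph
  config:
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          templates:
            # one template per rook_cluster label, same modules in each
            - condition.equals:
                kubernetes.labels.rook_cluster: "rook-ceph"
              config:
                - module: ceph
                  metricsets: [mgr_cluster_disk, mgr_osd_perf, mgr_pool_disk, mgr_osd_pool_stats, mgr_osd_tree]
                  period: 10s
                  hosts: ["https://${data.host}:8003"]
                  username: '${CEPH_API_USERNAME}'
                  password: '${CEPH_API_PASSWORD}'
                  ssl.verification_mode: "none"
                - module: prometheus
                  period: 10s
                  hosts: ["${data.host}:9283"]
                  metrics_path: /metrics
                  metrics_filters:
                    include: ["ceph_osd_stat_byte*"]
            - condition.equals:
                kubernetes.labels.rook_cluster: "rdg-rook-ceph"
              config:
                - module: ceph
                  metricsets: [mgr_cluster_disk, mgr_osd_perf, mgr_pool_disk, mgr_osd_pool_stats, mgr_osd_tree]
                  period: 10s
                  hosts: ["https://${data.host}:8003"]
                  username: '${CEPH_API_USERNAME}'
                  password: '${CEPH_RDG_API_PASSWORD}'
                  ssl.verification_mode: "none"
                - module: prometheus
                  period: 10s
                  hosts: ["${data.host}:9283"]
                  metrics_path: /metrics
                  metrics_filters:
                    include: ["ceph_osd_stat_byte*"]
```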