Hi!
At the moment I am collecting metrics from my OpenShift cluster with the collector metricset like so:
```yaml
metricbeat.modules:
- module: prometheus
  period: 15s
  timeout: 15s
  hosts: ["https://prometheus-k8s.openshift-monitoring.svc.cluster.local:9091"]
  metrics_path: '/federate'
  query:
    'match[]': '{__name__=" *and a bunch of different metrics that I am interested in*"}'
```
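(For illustration only: the `match[]` selector can also grab several metrics at once with a regex match. The metric names below are placeholders, not my actual list.)

```yaml
query:
  'match[]': '{__name__=~"cluster_operator_up|kube_pod_status_phase"}'
```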
However, I discovered that it's also possible to run queries directly against the Prometheus server with the `query` metricset.
I could do this, for example:
```yaml
- module: prometheus
  period: 15s
  timeout: 15s
  hosts: ["https://prometheus-k8s.openshift-monitoring.svc.cluster.local:9091"]
  metricsets: ["query"]
  queries:
  - name: 'cluster_operator_up'
    path: '/api/v1/query'
    params:
      query: "cluster_operator_up"
```
Is there a preferred way to do this?
Would you expect the two metricsets to put different loads on the Prometheus server, depending on how many metrics I am querying or collecting?
Right now the OpenShift cluster is small (3 masters, 3 workers), but it will scale out in the future. I have deployed Metricbeat as a single pod with a Deployment, and it ships the data to an external Logstash cluster.
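In case it matters, the Deployment looks roughly like this (a sketch from memory; the namespace, image tag, and ConfigMap name are placeholders, not my exact values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat
  namespace: metricbeat          # placeholder namespace
spec:
  replicas: 1                    # single pod for now
  selector:
    matchLabels:
      app: metricbeat
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:8.x   # placeholder tag
        args: ["-c", "/etc/metricbeat.yml", "-e"]
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          subPath: metricbeat.yml
      volumes:
      - name: config
        configMap:
          name: metricbeat-config   # holds the module config above
```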
Any insight would be much appreciated!