I'm not entirely sure what is causing this.
Autodiscover is configured like so:
- type: kubernetes
  include_annotations: ["prometheus.io.scrape", "prometheus.io.port"]
  resource: service
  templates:
    - condition:
        and:
          - contains:
              kubernetes.annotations.prometheus.io/scrape: "true"
          - equals:
              kubernetes.service.name: "infinispan"
      config:
        - module: prometheus
          fields_under_root: true
          fields:
            event.dataset: prometheus.infinispan
          period: 20s
          hosts: ["${data.host}:${data.kubernetes.annotations.prometheus.io/port}"]
          metrics_path: /metrics
          metricsets: ["collector"]
    - condition:
        and:
          - contains:
              kubernetes.annotations.prometheus.io/scrape: "true"
          - not:
              equals:
                kubernetes.service.name: "infinispan"
      config:
        - module: prometheus
          period: 20s
          # Prometheus exporter host / port
          hosts: ["${data.host}:${data.port}"]
          metrics_path: /metrics
          metricsets: ["collector"]
Nothing too crazy.
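For comparison, this is the kind of static (non-autodiscover) config I would expect to scrape a single exporter every 20s, which should at least show whether the delay is in autodiscover or in the prometheus module / ingest path. This is only a sketch; the service name and port are placeholders, not our real values:

metricbeat.modules:
  - module: prometheus
    metricsets: ["collector"]
    period: 20s
    metrics_path: /metrics
    # Placeholder exporter endpoint; substitute a real service DNS name and port
    hosts: ["my-exporter.my-namespace.svc:9090"]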
Metrics DO come in, but they arrive anywhere from 20 to 25 minutes apart, even though the period is set to 20s. This is very much the case for our Galera metrics.
Monitoring our databases is... very important as you can imagine.
The Metricbeat logs reveal nothing out of the ordinary. I also checked the Elasticsearch logs for any errors on ingest, and they're pretty clean.
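The only other thing I can think of is turning Metricbeat's logging up to debug so that autodiscover start/stop events and each fetch actually show up in the logs. Something like the snippet below; the selector names are my guess at the relevant loggers, and logging.level: debug on its own would also work, just noisier:

logging.level: debug
# Selector names are a guess at the relevant loggers; "*" would log everything
logging.selectors: ["autodiscover", "kubernetes"]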
Is there something I have overlooked here? I've been scratching my head and pulling my hair out over this for the past 3 days.
ANY help would be very much appreciated.