Prometheus metrics and alerting

Hello,

we use the Prometheus integration to ship metrics from our Kubernetes cluster via Elastic Agent (using the Prometheus remote-write feature), and we're running into some problems.

There are two fields associated with each metric when exploring in Discover: a counter field and a rate field. While the counter field is updated properly (as we'd expect), the rate field is always 0 for every metric we ship to Elastic from our k8s cluster. Could you help us determine the cause of this?

Additionally, we tried to set up alert rules that trigger an alert when a Prometheus metric crosses a threshold, but we didn't find suitable rule types/functions for our requirements.

Example: we'd like to define a rule that fires an alert when the rate of failed requests exceeds a certain threshold.

How can we achieve this?

Thank you,
Mislav

Hi Mislav, Elastic uses kube-state-metrics to fetch container metrics. I assume you also have an endpoint of your own, typically a Prometheus metrics exporter.

Could you please show the output of that exporter?

Also, you can set up alerting in Kibana based on your threshold.
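
For example, here is a rough sketch of creating a threshold rule through Kibana's alerting API (the Kibana URL, credentials, index pattern, field name, and threshold below are placeholders you'd replace with your own):

```python
# Sketch: create an index threshold rule via Kibana's alerting HTTP API.
# All names and values below are assumptions/placeholders, not your actual setup.
import requests

KIBANA_URL = "https://kibana.example.com"  # placeholder Kibana endpoint
AUTH = ("elastic", "changeme")             # placeholder credentials

rule = {
    "name": "Failed request counter above threshold",
    "rule_type_id": ".index-threshold",    # built-in index threshold rule type
    "consumer": "alerts",
    "schedule": {"interval": "1m"},
    "params": {
        "index": ["metrics-prometheus.remote_write-*"],  # placeholder data stream
        "timeField": "@timestamp",
        "aggType": "max",
        "aggField": "prometheus.http_requests_failed_total.counter",  # placeholder field
        "groupBy": "all",
        "timeWindowSize": 5,
        "timeWindowUnit": "m",
        "thresholdComparator": ">",
        "threshold": [100],
    },
    "actions": [],
}

resp = requests.post(
    f"{KIBANA_URL}/api/alerting/rule",
    json=rule,
    auth=AUTH,
    headers={"kbn-xsrf": "true"},  # required header for Kibana's HTTP API
)
resp.raise_for_status()
print(resp.json()["id"])
```

Note this compares the raw aggregated value against a threshold; it doesn't compute a rate or increase for you.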

Hi,

sorry, I'm talking about custom metrics here, which are exposed via a /metrics endpoint and scraped by Prometheus before being pushed to Elastic using the Prometheus integration (remote-write feature): Prometheus | Documentation.

For alert rules, the best we managed to come up with is aggregating the data with a DSL query, which seems to give us correct values when bucketed with a 'date_histogram' aggregation (a sketch of the kind of query we mean is below). But we didn't find a way to use these values in an alert rule.
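
This is a minimal sketch of such a query using the Python client; the index pattern, label filter, and metric field names are placeholders for what the Prometheus integration writes in our cluster:

```python
# Sketch: date_histogram aggregation over a Prometheus counter field.
# Index and field names below are placeholders, not our real ones.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://elasticsearch.example.com:9200",
    basic_auth=("elastic", "changeme"),  # placeholder connection details
)

resp = es.search(
    index="metrics-prometheus.remote_write-*",              # placeholder data stream
    size=0,
    query={"term": {"prometheus.labels.job": "my-app"}},    # placeholder label filter
    aggs={
        "per_minute": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "1m"},
            "aggs": {
                "counter_max": {
                    # max of the cumulative counter within each bucket
                    "max": {"field": "prometheus.http_requests_failed_total.counter"}
                }
            },
        }
    },
)

for bucket in resp["aggregations"]["per_minute"]["buckets"]:
    print(bucket["key_as_string"], bucket["counter_max"]["value"])
```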

The problem we're trying to solve is calculating the increase of the time-series counter metrics that we get from Prometheus, and then firing an alert when that increase exceeds a certain numeric threshold.
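
For illustration, something along these lines (again with placeholder index and field names) computes that increase by applying a derivative pipeline aggregation to the per-bucket max of the counter, and then checks the threshold on the client side. What we're missing is a rule type that can evaluate this for us:

```python
# Sketch: per-bucket counter increase via max + derivative pipeline aggregation.
# Placeholder names throughout; counter resets are not handled in this sketch.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://elasticsearch.example.com:9200",
    basic_auth=("elastic", "changeme"),  # placeholder connection details
)

THRESHOLD = 100  # alert if the counter grows by more than this per 5-minute bucket

resp = es.search(
    index="metrics-prometheus.remote_write-*",   # placeholder data stream
    size=0,
    aggs={
        "per_window": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "5m"},
            "aggs": {
                "counter_max": {
                    "max": {"field": "prometheus.http_requests_failed_total.counter"}
                },
                # derivative of the per-bucket max approximates the counter increase
                "counter_increase": {"derivative": {"buckets_path": "counter_max"}},
            },
        }
    },
)

for bucket in resp["aggregations"]["per_window"]["buckets"]:
    # the first bucket has no derivative value
    increase = bucket.get("counter_increase", {}).get("value")
    if increase is not None and increase > THRESHOLD:
        print(f"ALERT: counter rose by {increase} in bucket {bucket['key_as_string']}")
```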