Metricbeat prometheus module collector metricset escaping

Hi, I'm using Metricbeat 7.10.1 and I want to collect data with the Prometheus module (Golang Prometheus client) and send the data to a Kafka output.
Here's my YAML file:

  metricbeat.yml: |-
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          include_annotations: ["prometheus.io.scrape"]
          node: ${NODE_NAME}
          templates:
            - condition:
                contains:
                  kubernetes.annotations.prometheus.io/scrape: "true"
              config:
                - module: prometheus
                  metricsets: ["collector"]
                  hosts: "${data.host}:${data.port}"
                  period: 30s
                  query:
                    format: prometheus
                  metrics_path: /api/v1/test/metrics
                  fields:
                    topic: test_apm-prom
                    host_type: kubernetes
                  service.name: test
    output.kafka:
      hosts: ["x.x.x.x.:9092"]
      topic: 'test_topic'
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000

My question is this: when I use metricsets: ["collector"], my Metricbeat requests /api/v1/test/metrics, and it returns this result:

  promhttp_metric_handler_requests_total{code="500"} 0
  promhttp_metric_handler_requests_total{code="503"} 0
  go_gc_duration_seconds_sum 0.063136861
  go_gc_duration_seconds_count 1487
  go_goroutines 12
  go_memstats_alloc_bytes 4.780744e+06

And here is the problem.
My returned data contains { and }, so when I use output.kafka, the data is separated and Metricbeat sends it to Kafka several times.
Is there any way to escape these { and } characters?

Hi @4orty,

Do you mean that the same data is duplicated, or that Metricbeat sends multiple events for each request to Prometheus?

It is expected that Metricbeat sends multiple events for each request to Prometheus, one for every set of labels (the values between { and }).

So for example, for the case you mention I would expect at least three events:

  • One for metrics with code="500" (they will contain a field prometheus.labels.code: 500).
  • Another one for metrics with code="503" (they will contain a field prometheus.labels.code: 503).
  • And a last one for metrics without labels (see the sketch below).
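
Roughly, those three events would look something like this simplified sketch. The prometheus.labels.* / prometheus.metrics.* field layout is approximate, the values are taken from your sample output, and real events also carry metadata such as @timestamp, metricset, and the Kubernetes fields:

  # Simplified sketch only: one event per label set.
  # Field layout (prometheus.labels.* / prometheus.metrics.*) is approximate;
  # real events include additional metadata fields not shown here.
  - prometheus:
      labels:
        code: "500"
      metrics:
        promhttp_metric_handler_requests_total: 0
  - prometheus:
      labels:
        code: "503"
      metrics:
        promhttp_metric_handler_requests_total: 0
  - prometheus:   # the event for metrics without labels
      metrics:
        go_gc_duration_seconds_sum: 0.063136861
        go_gc_duration_seconds_count: 1487
        go_goroutines: 12
        go_memstats_alloc_bytes: 4780744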

Hi @jsoriano, yes!! I got three events in my case.

Two with a label, and one without.
I just want to send my Prometheus data to Kafka in a single event, by escaping the {} characters.
Is that possible?
Right now too many events are created (twelve Kafka events per Prometheus request).

No, this is not possible with the current implementation.

Metrics with different labels are sent in different events so that it is possible to filter metrics with the same labels. E.g. in this case you may want to show a graph with the values of promhttp_metric_handler_requests_total per status code; for that, the query needs to group by prometheus.labels.code.

I see that with a different implementation that escapes the labels you could have something like promhttp_metric_handler_requests_total_code_500 and promhttp_metric_handler_requests_total_code_503, and for this case it could work well as you could query for the specific codes you are interested in.
But this approach poses some problems in other cases:

  • What to do with metrics that have more than one label?
  • Each label can have many values; using them as part of the metric name would lead to a fields explosion in the index mapping (sketched below).
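
To illustrate the second point with a purely hypothetical sketch: assume the metric had an extra method label besides code (that label and all values below are invented, not from your output). With labels escaped into the metric name, every label-value combination becomes its own field in the index mapping:

  # Hypothetical sketch only: assumes an extra "method" label besides "code";
  # none of these field names or values come from the real output above.
  # Escaping labels into the metric name creates one mapping field per combination:
  promhttp_metric_handler_requests_total_code_200_method_get: 42
  promhttp_metric_handler_requests_total_code_200_method_post: 17
  promhttp_metric_handler_requests_total_code_500_method_get: 0
  promhttp_metric_handler_requests_total_code_503_method_post: 0
  # ...and so on for every combination, so the mapping keeps growing
  # with the cardinality of the labels.

With labels kept as separate fields (prometheus.labels.*), the mapping stays the same size no matter how many label values appear.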

Is this producing any problem for you? Take into account that, by having this many (smaller) events, your metrics are stored in a format that is more convenient for indexing and querying.


Thank you very much for your kind reply.
I understood what you said.
In fact, the reason I wanted this feature is that we have no need to query based on labels,
so I was just a little uncomfortable that so many events per Prometheus request create a lot of duplicate data and take a large amount of storage.
But now I realize there is no way to produce a single event per Prometheus request in Beats.

Thanks for your reply!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.