Metricbeat http module can't parse query key with period in it

Query key-value pairs such as resource.kafka.id: XXXX are being encoded as resource=map%5Bkafka%3Amap%5Bid%3AXXXX%5D%5D instead of resource.kafka.id=XXXX.
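A likely explanation for the odd encoding: Beats' config loader treats dotted keys as nested paths, so the key becomes a nested map, and that map gets stringified the way Go prints maps before being URL-encoded. A small Python sketch (illustrative only; the real code is Go inside Metricbeat) reproduces the observed query string:

```python
from urllib.parse import urlencode

# The dotted key "resource.kafka.id" is split into a nested map:
#   {"resource": {"kafka": {"id": "XXXX"}}}
# The inner map is then stringified the way Go's fmt prints maps,
# i.e. "map[kafka:map[id:XXXX]]", before being URL-encoded.
go_style_value = "map[kafka:map[id:XXXX]]"
print(urlencode({"resource": go_style_value}))
# -> resource=map%5Bkafka%3Amap%5Bid%3AXXXX%5D%5D
```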

This is my Metricbeat http module configuration:

- module: http
  metricsets:
    - json
  period: 1m
  hosts: ["https://api.telemetry.confluent.cloud:443"]
  ssl.verification_mode: "none"
  namespace: "http"
  path: "/v2/metrics/cloud/export"
  query:
    resource.kafka.id: XXXX
  method: "GET"
  username: ...
  password: ...

Hey @Nimar_Arora,

I think there is a general issue with configuration keys that contain dots, see https://github.com/elastic/beats/issues/27079

Would using a more json-like syntax work?

  query: { "resource.kafka.id": XXXX }

Thanks for the link to the existing issue and the suggestion. The JSON-like syntax didn't work, unfortunately; it produced the same problematic URL.

I also tried the following:

  query:
    { 'resource%2Ekafka%2Eid': 'XXXX' }

Which resulted in the following query being sent:

resource%252Ekafka%252Eid=XXXX

If there were a way to tell Metricbeat not to encode the query, this last approach might have worked.
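For what it's worth, the double encoding above is expected behavior for any URL encoder: '%' is itself a reserved character and gets encoded as '%25'. A quick Python analogue of what happened to the pre-encoded key:

```python
from urllib.parse import urlencode

# Pre-encoded dots ("%2E") get encoded again: '%' -> '%25',
# which is why the server receives "%252E" instead of "%2E".
print(urlencode({"resource%2Ekafka%2Eid": "XXXX"}))
# -> resource%252Ekafka%252Eid=XXXX
```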

As this is a GET request, it might work if you put the whole URL in hosts. Did you try something like this?

- module: http
  metricsets:
    - json
  period: 1m
  hosts: ["https://api.telemetry.confluent.cloud:443/v2/metrics/cloud/export?resource.kafka.id=XXXX"]
  ssl.verification_mode: "none"
  namespace: "http"
  method: "GET"
  username: ...
  password: ...

Yup, I tried that as well. This time the error is: invalid character '#' looking for beginning of value

Where is there a '#'? In the XXXX?

There are '#' characters in the comments in the next block!

Umm, this is weird. Could you share the config including these comments (obfuscated if needed)?

OK, I tried this again, here is the full error message:

metricbeat[19013]: {"log.level":"error","@timestamp":"2022-08-25T09:34:55.013Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset http.json: invalid character '#' looking for beginning of value","service.name":"metricbeat","ecs.version":"1.6.0"}

And here is the full config file /etc/metricbeat/modules.d/http.yml

# Module: http
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/8.3/metricbeat-module-http.html

- module: http
  metricsets:
    - json
  period: 1m
  hosts: ["https://api.telemetry.confluent.cloud:443/v2/metrics/cloud/export?resource.kafka.id=XXX-XXXX"]
  #path: "/v2/metrics/cloud/export"
  #query: {"resource.kafka.id": "XXX-XXXX"}
  ssl.verification_mode: "none"
  namespace: "confluent"
  method: "GET"
  username: YYYY
  password: ZZZZ
  #request.enabled: false
  #response.enabled: false
  #json.is_array: false
  #dedot.enabled: false

- module: http
  #metricsets:
  #  - server
  host: "localhost"
  port: "8080"
  enabled: false
  #paths:
  #  - path: "/foo"
  #    namespace: "foo"
  #    fields: # added to the response in root; overwrites existing fields
  #      key: "value"

Ah, ok, I think this error comes from parsing the response received from the server, from here, or here.

Btw, do you know if the response received is an array? If so, json.is_array: true should be used.

Would you have a chance to capture this response? Maybe using something like https://mitmproxy.org/, or modifying Metricbeat to log what it is trying to parse.

Hmm... the response doesn't really look like a JSON object at all. Here is what it looks like:

# HELP confluent_kafka_server_received_bytes The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_received_bytes gauge
confluent_kafka_server_received_bytes{kafka_id="XXXX",topic="XXXX",} N.N NNNNN
confluent_kafka_server_received_bytes{kafka_id="XXXX",topic="XXXXX",} N.N NNNNN
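Feeding that text to any JSON parser fails on the very first character, which matches the error above; e.g. a Python analogue of Go's "invalid character '#' looking for beginning of value":

```python
import json

# The response starts with a Prometheus comment line, not JSON.
prometheus_text = '# HELP confluent_kafka_server_received_bytes ...'
try:
    json.loads(prometheus_text)
except json.JSONDecodeError as err:
    # Python's equivalent of Go's
    # "invalid character '#' looking for beginning of value"
    print(err)
    # -> Expecting value: line 1 column 1 (char 0)
```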

So it seems the problem is that I was incorrect in using the JSON metricset here. Your answer above technically solves the issue with the key having a dot in it, so I will mark that as the solution. My problem of getting these metrics into Elastic still remains to be solved 🙂

Oh, those look like Prometheus metrics; try the Prometheus module then 🙂

With the collector metricset specifically: Prometheus collector metricset | Metricbeat Reference [8.3] | Elastic
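A minimal sketch of such a config, untested: the metrics_path option is part of the Prometheus collector metricset, but carrying the query string inside it (mirroring the hosts workaround above) is an assumption:

```yaml
- module: prometheus
  metricsets: ["collector"]
  period: 1m
  hosts: ["https://api.telemetry.confluent.cloud:443"]
  # Assumption: the query string rides along in metrics_path,
  # as it did in the hosts-based workaround earlier in the thread.
  metrics_path: "/v2/metrics/cloud/export?resource.kafka.id=XXX-XXXX"
  ssl.verification_mode: "none"
  username: YYYY
  password: ZZZZ
```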


Wow, yes, that works!

