How to separate inputs with the jmx input plugin?

Hello all,

I'm trying to set up monitoring of a confluentinc/cp-server-connect:7.6.0 Docker container and the connectors I run within it, using the jmx input plugin.
I would like to separate the pipelines per connector, as each connector has different attributes/metrics.
My pipelines.yml looks like this:

- pipeline.id: ODataV2Source
  path.config: "/etc/logstash/pipeline/odatav2source.conf"

The input part of odatav2source.conf looks like:

# ==================
# 1) INPUT
# ==================
input {
  jmx {
    path => "/usr/share/logstash/jmxconf"
    polling_frequency => 15
    nb_thread => 4
    type => "jmx"
  }
}

and my odatav2source.json in /usr/share/logstash/jmxconf:

{
  "host": "connect",
  "port": 9999,
  "alias": "kafka.connect",
  "queries": [
    {
      "object_name": "org.init.ohja.kafka.connect:connector=*,task=*,type=odatav2-source-task-metrics",
      "attributes": [
        "0-entityset",
        "0-position",
        "0-service",
        "0-service-url",
        "0-topic",
        "0-uri-type",
        "active-subscriptions",
        "last-extraction",
        "retries"
      ],
      "object_alias": "${type}.${connector}.${task}"
    },...

Now I would like to have a .conf and a .json file for every connector, and possibly also put every connector's metrics in a separate index to keep things clean and separated.
However, setting the path in the jmx input to "/usr/share/logstash/jmxconf/odatav2source.json" does not work and results in this error:
[ODataV2Source][many numbers] Not a directory - /usr/share/logstash/jmxconf/odatav2source.json

Any help in setting this up would be appreciated!
Best regards
David

I don't see why you want a separate input for each JMX object, but if you do, you will need to use a different directory for the configuration file of each one, since the jmx input's path option points to a directory rather than a single file (which is what the "Not a directory" error is telling you).
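For example, something like this in pipelines.yml (a rough sketch; the second pipeline name and the directory layout are just placeholders):

- pipeline.id: ODataV2Source
  path.config: "/etc/logstash/pipeline/odatav2source.conf"
- pipeline.id: SomeOtherSource
  path.config: "/etc/logstash/pipeline/someothersource.conf"

with each .conf pointing its jmx input at a directory that contains only that connector's .json file:

input {
  jmx {
    # this directory would contain only odatav2source.json
    path => "/usr/share/logstash/jmxconf/odatav2source"
    polling_frequency => 15
    nb_thread => 4
    type => "jmx"
  }
}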

If you set an alias like that then the events will have fields like

  "metric_path" => "kafka.connect.foo.bar.baz",
  "metric_value_string" => "Yowzer"

You could do something like

 mutate { add_field => { "[@metadata][routing]" => "%{metric_path}" } }
 mutate { gsub => [ "[@metadata][routing]", "^(\w+\.\w+).*", "\1" ] }

which gets the alias into that field. You can then use pipeline-to-pipeline with a distributor pattern based on the alias to get each type in its own pipeline.
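As a sketch of that (the pipeline ids, addresses, hosts, and index names below are made up, so adjust them to your setup): one intake pipeline runs the jmx input plus the mutate filters above, and its output routes on [@metadata][routing]:

- pipeline.id: jmx-intake
  path.config: "/etc/logstash/pipeline/jmx-intake.conf"
- pipeline.id: odatav2source
  path.config: "/etc/logstash/pipeline/odatav2source.conf"

output section of jmx-intake.conf:

output {
  if [@metadata][routing] == "kafka.connect" {
    pipeline { send_to => ["odatav2source"] }
  } else {
    # anything unmatched goes to a catch-all pipeline you would also need to define
    pipeline { send_to => ["unmatched-metrics"] }
  }
}

and odatav2source.conf:

input {
  pipeline { address => "odatav2source" }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "jmx-odatav2source-%{+YYYY.MM.dd}"
  }
}

That also gives you one index per connector, since each downstream pipeline has its own elasticsearch output.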

This use of two fields to contain the metric_path and metric_value reminds me of JMS, where you get a hash containing request and value fields, and the value field is a hash full of response objects. To actually use the data, you need to completely restructure things. Of course the way to restructure things will depend on your use case. It is unlikely to be as simple as

mutate { add_field => { "%{metric_path}" => "%{metric_value_string}" } }

not least because sometimes that is going to have to be "%{metric_value_number}".