Metricbeat and RabbitMQ

I've set up RabbitMQ monitoring with Metricbeat, and this is what I have in the logs:

error in mapping: error applying schema: 22 errors: key `mnesia_disk_tx_count` not found; key `mnesia_ram_tx_count` not found; key `gc_bytes_reclaimed` not found; key `gc_num` not found; key `io_file_handle_open_attempt_avg_time` not found; key `io_file_handle_open_attempt_count` not found; key `io_read_avg_time` not found; key `io_read_bytes` not found; key `io_read_count` not found; key `io_read_count` not found; key `io_seek_avg_time` not found; key `io_seek_count` not found; key `io_sync_count` not found; key `io_sync_avg_time` not found; key `io_write_avg_time` not found; key `io_write_bytes` not found; key `io_write_count` not found; key `queue_index_write_count` not found; key `queue_index_journal_write_count` not found; key `queue_index_read_count` not found; key `msg_store_read_count` not found; key `msg_store_write_count` not found

RabbitMQ version: 3.3.5
Erlang R16B03-1

Hi @tennaen!

It looks like the mapping in Elasticsearch is not updated.
Could you try deleting the mapping from ES and then running Metricbeat setup again, so that the updated mappings are installed?
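
For reference, a minimal sketch of what that could look like from the Kibana Dev Console (the template and index names here are assumptions; substitute the ones from your setup):

# Remove the stale index template and the index created from it
DELETE _template/metricbeat-rabbitmq
DELETE metricbeat-focusdev01

Then re-run metricbeat setup --index-management from the shell so the current template and mappings are written again.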

Regards,
C.

@ChrsMark thank you for your response.
I've deleted all entries for Metricbeat (indexes and templates) and I still get the same error.
This is my metricbeat.yml Elasticsearch output configuration:

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["ip:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "monitor"
  password: "${monitor.password}"

  index: "metricbeat-focusdev01"
setup.template.name: "metricbeat-rabbitmq"
setup.template.pattern: "metricbeat-focus*"

And the rabbitmq.yml configuration:

- module: rabbitmq
  metricsets:
    - exchange
    - node
    - queue
    - connection
  period: 10s
  hosts: ["10.2.197.85:15672"]
  username: focusdev
  password: ${focusdev.password}

Also, I wanted to create a custom index for my Metricbeat, but the default one is created instead.

Maybe the post Custom index name does not work on metricbeat could be of help.
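
One common cause on 7.x: when index lifecycle management is enabled (the default), the custom output.elasticsearch.index, setup.template.name, and setup.template.pattern settings are ignored, which would explain the default index being created. A minimal sketch of metricbeat.yml with ILM disabled, assuming you don't need ILM for this index (values taken from your config above):

# With ILM enabled (the 7.x default), custom index and template
# settings are ignored and the default index is used instead.
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["ip:9200"]
  index: "metricbeat-focusdev01"

setup.template.name: "metricbeat-rabbitmq"
setup.template.pattern: "metricbeat-focus*"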

But I still have a problem with the mappings.

In order to debug this we need to isolate the problem. Could you verify whether events are actually written to the defined index?

If so, is that index properly set up? Can you check its mapping and see whether the keys that are reported as failing are present? (You can do this from Kibana.)
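
For example, from the Dev Console (using the index name from your config):

GET metricbeat-focusdev01/_mapping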

@ChrsMark yes, I've managed to write to the defined index.
The index was created by Metricbeat using this template:

{
  "rabbitmq" : {
    "order" : 0,
    "index_patterns" : [
      "rabbitmq-*"
    ],
    "settings" : {
      "index" : {
        "lifecycle" : {
          "name" : "rabbitmq-lifecycle-policy",
          "rollover_alias" : "rabbitmq-dev"
        }
      }
    },
    "mappings" : {
      "_routing" : {
        "required" : false
      },
      "numeric_detection" : false,
      "dynamic_date_formats" : [
        "strict_date_optional_time",
        "yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z"
      ],
      "_meta" : { },
      "dynamic" : true,
      "_source" : {
        "excludes" : [ ],
        "includes" : [ ],
        "enabled" : true
      },
      "date_detection" : true,
      "properties" : { }
    },
    "aliases" : { }
  }
}

I cannot find those keys in the mappings.

A little update. When I run this GET in the Dev Console:

GET rabbitmq-dev/_mapping/field/rabbitmq.node.*

I get this:

{
  "rabbitmq-dev--000001" : {
    "mappings" : {
      "rabbitmq.node.queue.index.write.count" : {
        "full_name" : "rabbitmq.node.queue.index.write.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.processors" : {
        "full_name" : "rabbitmq.node.processors",
        "mapping" : {
          "processors" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.mnesia.disk.tx.count" : {
        "full_name" : "rabbitmq.node.mnesia.disk.tx.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.write.avg.ms" : {
        "full_name" : "rabbitmq.node.io.write.avg.ms",
        "mapping" : {
          "ms" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.sync.avg.ms" : {
        "full_name" : "rabbitmq.node.io.sync.avg.ms",
        "mapping" : {
          "ms" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.sync.count" : {
        "full_name" : "rabbitmq.node.io.sync.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.write.count" : {
        "full_name" : "rabbitmq.node.io.write.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.msg.store_read.count" : {
        "full_name" : "rabbitmq.node.msg.store_read.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.read.bytes" : {
        "full_name" : "rabbitmq.node.io.read.bytes",
        "mapping" : {
          "bytes" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.name" : {
        "full_name" : "rabbitmq.node.name",
        "mapping" : {
          "name" : {
            "type" : "keyword",
            "ignore_above" : 1024
          }
        }
      },
      "rabbitmq.node.io.seek.count" : {
        "full_name" : "rabbitmq.node.io.seek.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.proc.used" : {
        "full_name" : "rabbitmq.node.proc.used",
        "mapping" : {
          "used" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.socket.used" : {
        "full_name" : "rabbitmq.node.socket.used",
        "mapping" : {
          "used" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.mem.used.bytes" : {
        "full_name" : "rabbitmq.node.mem.used.bytes",
        "mapping" : {
          "bytes" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.fd.used" : {
        "full_name" : "rabbitmq.node.fd.used",
        "mapping" : {
          "used" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.mem.limit.bytes" : {
        "full_name" : "rabbitmq.node.mem.limit.bytes",
        "mapping" : {
          "bytes" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.read.count" : {
        "full_name" : "rabbitmq.node.io.read.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.gc.reclaimed.bytes" : {
        "full_name" : "rabbitmq.node.gc.reclaimed.bytes",
        "mapping" : {
          "bytes" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.socket.total" : {
        "full_name" : "rabbitmq.node.socket.total",
        "mapping" : {
          "total" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.write.bytes" : {
        "full_name" : "rabbitmq.node.io.write.bytes",
        "mapping" : {
          "bytes" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.uptime" : {
        "full_name" : "rabbitmq.node.uptime",
        "mapping" : {
          "uptime" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.run.queue" : {
        "full_name" : "rabbitmq.node.run.queue",
        "mapping" : {
          "queue" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.proc.total" : {
        "full_name" : "rabbitmq.node.proc.total",
        "mapping" : {
          "total" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.read.avg.ms" : {
        "full_name" : "rabbitmq.node.io.read.avg.ms",
        "mapping" : {
          "ms" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.fd.total" : {
        "full_name" : "rabbitmq.node.fd.total",
        "mapping" : {
          "total" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.msg.store_write.count" : {
        "full_name" : "rabbitmq.node.msg.store_write.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.mnesia.ram.tx.count" : {
        "full_name" : "rabbitmq.node.mnesia.ram.tx.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.seek.avg.ms" : {
        "full_name" : "rabbitmq.node.io.seek.avg.ms",
        "mapping" : {
          "ms" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.reopen.count" : {
        "full_name" : "rabbitmq.node.io.reopen.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.type" : {
        "full_name" : "rabbitmq.node.type",
        "mapping" : {
          "type" : {
            "type" : "keyword",
            "ignore_above" : 1024
          }
        }
      },
      "rabbitmq.node.io.file_handle.open_attempt.count" : {
        "full_name" : "rabbitmq.node.io.file_handle.open_attempt.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.gc.num.count" : {
        "full_name" : "rabbitmq.node.gc.num.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.disk.free.limit.bytes" : {
        "full_name" : "rabbitmq.node.disk.free.limit.bytes",
        "mapping" : {
          "bytes" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.disk.free.bytes" : {
        "full_name" : "rabbitmq.node.disk.free.bytes",
        "mapping" : {
          "bytes" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.queue.index.read.count" : {
        "full_name" : "rabbitmq.node.queue.index.read.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.io.file_handle.open_attempt.avg.ms" : {
        "full_name" : "rabbitmq.node.io.file_handle.open_attempt.avg.ms",
        "mapping" : {
          "ms" : {
            "type" : "long"
          }
        }
      },
      "rabbitmq.node.queue.index.journal_write.count" : {
        "full_name" : "rabbitmq.node.queue.index.journal_write.count",
        "mapping" : {
          "count" : {
            "type" : "long"
          }
        }
      }
    }
  }
}

And it looks like those keys can be found in the mappings.

For example, `mnesia_disk_tx_count` is found, but in the format `mnesia.disk.tx.count`.

I think I found the problem. Could you please provide more logging output? I'm afraid it doesn't have to do with the ES mapping (my fault here, sorry); the error comes from https://github.com/elastic/beats/blob/b8d10dca8bfe49144fb13b1d71077012464c6e02/metricbeat/module/rabbitmq/node/node.go#L110. This means that the data Metricbeat gets from RabbitMQ cannot be matched against the metricset's schema. For instance, this had happened to others too, at https://github.com/elastic/beats/pull/6887. Could you confirm?
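
To illustrate what happens there, a minimal sketch of the schema mechanism in libbeat (not the actual node.go code; import paths and the Apply signature may differ slightly between versions):

package main

import (
	"fmt"

	s "github.com/elastic/beats/libbeat/common/schema"
	c "github.com/elastic/beats/libbeat/common/schema/mapstriface"
)

// The metricset declares a schema that renames the flat JSON keys from the
// RabbitMQ management API into nested event fields. This is also why the
// error mentions `mnesia_disk_tx_count` while the ES mapping contains
// rabbitmq.node.mnesia.disk.tx.count.
var nodeSchema = s.Schema{
	"mnesia": s.Object{
		"disk": s.Object{
			"tx": s.Object{"count": c.Int("mnesia_disk_tx_count")},
		},
	},
}

func main() {
	// A response from an old RabbitMQ that lacks the newer stats keys:
	// Apply reports one "key ... not found" error per missing key.
	event, err := nodeSchema.Apply(map[string]interface{}{"mem_used": 12345})
	fmt.Println(event, err) // empty event; err: key `mnesia_disk_tx_count` not found
}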

C.

@ChrsMark here is the message from the Metricbeat log:

2020-02-27T09:34:15.217+0100    INFO    module/wrapper.go:252   Error fetching data for metricset rabbitmq.node: error in mapping: error applying schema: 22 errors: key `mnesia_disk_tx_count` not found; key `mnesia_ram_tx_count` not found; key `msg_store_read_count` not found; key `msg_store_write_count` not found; key `io_read_avg_time` not found; key `io_read_bytes` not found; key `io_read_count` not found; key `io_read_count` not found; key `io_seek_avg_time` not found; key `io_seek_count` not found; key `io_sync_avg_time` not found; key `io_sync_count` not found; key `io_write_avg_time` not found; key `io_write_bytes` not found; key `io_write_count` not found; key `io_file_handle_open_attempt_count` not found; key `io_file_handle_open_attempt_avg_time` not found; key `gc_bytes_reclaimed` not found; key `gc_num` not found; key `queue_index_journal_write_count` not found; key `queue_index_read_count` not found; key `queue_index_write_count` not found
2020-02-27T09:34:15.217+0100    DEBUG   [processors]    processing/processors.go:186    Publish event: {
  "@timestamp": "2020-02-27T08:34:15.206Z",
  "@metadata": {
    "beat": "metricbeat",
    "type": "_doc",
    "version": "7.6.0"
  },
  "error": {
    "message": "error in mapping: error applying schema: 22 errors: key `mnesia_disk_tx_count` not found; key `mnesia_ram_tx_count`
not found; key `msg_store_read_count` not found; key `msg_store_write_count` not found; key `io_read_avg_time` not found; key `io_re
ad_bytes` not found; key `io_read_count` not found; key `io_read_count` not found; key `io_seek_avg_time` not found; key `io_seek_co
unt` not found; key `io_sync_avg_time` not found; key `io_sync_count` not found; key `io_write_avg_time` not found; key `io_write_by
tes` not found; key `io_write_count` not found; key `io_file_handle_open_attempt_count` not found; key `io_file_handle_open_attempt_
avg_time` not found; key `gc_bytes_reclaimed` not found; key `gc_num` not found; key `queue_index_journal_write_count` not found; ke
y `queue_index_read_count` not found; key `queue_index_write_count` not found"
  },
  "host": {
    "id": "8e36bdfa5ef1438f8057c5ae39f99af4",
    "containerized": false,
    "name": "osx02656",
    "hostname": "osx02656",
    "architecture": "x86_64",
    "os": {
      "family": "redhat",
      "name": "Red Hat Enterprise Linux Server",
      "kernel": "3.10.0-862.11.6.el7.x86_64",
      "codename": "Maipo",
      "platform": "rhel",
      "version": "7.5 (Maipo)"
    }
  },
  "agent": {
    "type": "metricbeat",
    "ephemeral_id": "12524069-57ed-4258-abbe-0f23d06a6af3",
    "hostname": "osx02656",
    "id": "bec5d274-1efd-41f4-bb26-f2c12c40613f",
    "version": "7.6.0"
  },
  "ecs": {
    "version": "1.4.0"
  },
  "event": {
    "duration": 11073446,
    "dataset": "rabbitmq.node",
    "module": "rabbitmq"
  },
  "metricset": {
    "name": "node",
    "period": 10000
  },
  "service": {
    "address": "10.2.197.86:15672",
    "type": "rabbitmq"
  }
}

And this one, for the queue dataset:

2020-02-27T09:34:15.207+0100    DEBUG   [schema]        schema/schema.go:64     ignoring error for key "pct": wrong format in `consumer_utilisation`: expected integer, found string
2020-02-27T09:34:15.207+0100    ERROR   [rabbitmq.queue]        queue/data.go:97        error in mapping: error applying schema: 2 errors: key `exclusive` not found; key `messages_persistent` not found
2020-02-27T09:34:15.207+0100    DEBUG   [schema]        schema/schema.go:64     ignoring error for key "pct": wrong format in `consumer_utilisation`: expected integer, found string
2020-02-27T09:34:15.207+0100    ERROR   [rabbitmq.queue]        queue/data.go:97        error in mapping: error applying schema: 2 errors: key `exclusive` not found; key `messages_persistent` not found
2020-02-27T09:34:15.207+0100    DEBUG   [processors]    processing/processors.go:186    Publish event: {
  "@timestamp": "2020-02-27T08:34:15.199Z",
  "@metadata": {
    "beat": "metricbeat",
    "type": "_doc",
    "version": "7.6.0"
  },
  "host": {
    "containerized": false,
    "hostname": "osx02656",
    "name": "osx02656",
    "architecture": "x86_64",
    "os": {
      "name": "Red Hat Enterprise Linux Server",
      "kernel": "3.10.0-862.11.6.el7.x86_64",
      "codename": "Maipo",
      "platform": "rhel",
      "version": "7.5 (Maipo)",
      "family": "redhat"
    },
    "id": "8e36bdfa5ef1438f8057c5ae39f99af4"
  },
  "agent": {
    "version": "7.6.0",
    "type": "metricbeat",
    "ephemeral_id": "12524069-57ed-4258-abbe-0f23d06a6af3",
    "hostname": "osx02656",
    "id": "bec5d274-1efd-41f4-bb26-f2c12c40613f"
  },
  "metricset": {
      "name": "queue",
    "period": 10000
  },
  "service": {
    "type": "rabbitmq",
    "address": "10.2.197.86:15672"
  },
  "event": {
    "dataset": "rabbitmq.queue",
    "module": "rabbitmq",
    "duration": 7674862
  },
  "error": {
    "message": "error applying schema: 2 errors: key `exclusive` not found; key `messages_persistent` not found"
  },
  "ecs": {
    "version": "1.4.0"
  }
}

Thank you!

It is what I mentioned before: what you get from RabbitMQ does not match the expected schema. Similar to what was happening at https://github.com/elastic/beats/pull/6887.

So is there anything I can do about it?

You can investigate whether this is normal on RabbitMQ's side. If it is something that can happen, you can open a GitHub issue/PR suggesting to fix it by making those keys optional, like the one I posted.
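
In the beats schema package that usually amounts to marking each key with s.Optional. A sketch reusing the hypothetical schema from the earlier example (not the actual patch):

package main

import (
	"fmt"

	s "github.com/elastic/beats/libbeat/common/schema"
	c "github.com/elastic/beats/libbeat/common/schema/mapstriface"
)

// Same schema as the sketch above, but the key is marked s.Optional:
// when the management API response lacks it, Apply now skips the field
// instead of returning a "key not found" error.
var nodeSchema = s.Schema{
	"mnesia": s.Object{
		"disk": s.Object{
			"tx": s.Object{
				"count": c.Int("mnesia_disk_tx_count", s.Optional),
			},
		},
	},
}

func main() {
	event, err := nodeSchema.Apply(map[string]interface{}{})
	fmt.Println(event, err) // empty event, nil error
}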

Thanks!

@ChrsMark looks like this problem is related to the RabbitMQ version. I ran another Metricbeat to monitor a RabbitMQ 3.7.14 cluster, and there are no error messages.

Ah yeah, right.

I don't see 3.3.5 listed in the docs: https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-rabbitmq.html#_compatibility_39

C.
