I've noticed some errors in the Metricbeat logs after activating the rabbitmq module.
I'm running Metricbeat 6.2.3 inside a Docker container.
The RabbitMQ version I'm testing the module against is 3.7.4.
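For reference, the module is enabled with roughly the following config (the hosts and credentials here are illustrative placeholders for my setup, not values I'm asserting as defaults):

```yaml
# modules.d/rabbitmq.yml — sketch only, values are assumptions
- module: rabbitmq
  metricsets: ["node", "queue"]
  period: 10s
  hosts: ["localhost:15672"]    # RabbitMQ management HTTP API
  username: guest
  password: guest
```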
2018-03-27T12:26:05.574Z INFO cfgfile/reload.go:219 Loading of config files completed.
2018-03-27T12:26:05.607Z ERROR schema/schema.go:41 Error on field 'count': Missing field: count, Error: Key disk_reads not found
2018-03-27T12:26:05.607Z ERROR schema/schema.go:41 Error on field 'count': Missing field: count, Error: Key disk_writes not found
2018-03-27T12:26:05.607Z ERROR schema/schema.go:41 Error on field 'count': Missing field: count, Error: Key disk_writes not found
2018-03-27T12:26:05.607Z ERROR schema/schema.go:41 Error on field 'count': Missing field: count, Error: Key disk_reads not found
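If it helps narrow things down: the message reads as if the `disk_reads`/`disk_writes` keys are simply absent from the management API payload that the metricset maps fields from. A minimal Python sketch of that strict-lookup behaviour (illustrative only; `node_stats` and `lookup` are hypothetical, not Beats' actual Go code):

```python
# Hypothetical, trimmed /api/nodes entry in which the io counters
# (disk_reads / disk_writes) are missing — the condition that appears
# to trigger the logged schema error.
node_stats = {
    "name": "rabbit@rabbitmq",
    "mem_used": 1024,
}

def lookup(stats, key):
    """Fail loudly when a mapped key is missing, like a strict schema mapper."""
    if key not in stats:
        raise KeyError("Key %s not found" % key)
    return stats[key]

for field in ("disk_reads", "disk_writes"):
    try:
        lookup(node_stats, field)
    except KeyError as err:
        print("Error on field 'count': Missing field: count, Error:", err)
```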
I've also noticed that the connection metricset is not available for the rabbitmq module in Metricbeat 6.2.3.
Could you please have a look and advise?
The connection metricset will only be available in 6.3. As for the error above, it seems some disk information is missing; it could be due to the RabbitMQ version you are using or to the OS. Does it show up only once, or does it persist over time?
In any case, could you open an issue about this on our GitHub repo? https://github.com/elastic/beats It would be great if you could also provide your OS and perhaps the configs for how you start / run rabbitmq, so we can reproduce it on our end.
Thanks for your reply. These errors persist over time, most likely on every rabbitmq module check.
I'm running RabbitMQ inside a container, using the official rabbitmq Docker image (3.7-management). Docker is running on top of CentOS 7.4.
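The broker side of my setup is just the stock image, started along these lines (the container name is arbitrary; the ports are RabbitMQ's defaults):

```shell
# Start the official RabbitMQ image with the management plugin enabled.
# 5672 is the AMQP port; 15672 is the management HTTP API that Metricbeat polls.
docker run -d --name rabbitmq \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3.7-management
```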
As you suggested, I've opened an issue on GitHub with all the steps to reproduce it: https://github.com/elastic/beats/issues/6685