We've been running Kibana and ES 5.6 with the basic license for quite a few months for the monitoring feature. For some reason now when I go to the monitoring section I get the dreaded "No monitoring data found" error. I've searched the forums and seen that others are having this issue, but none of those topics have resolved the issue for me.
I can see that both .monitoring-es-6* and .monitoring-kibana-6* indices are being written regularly and appear to be current. I've also restarted both ES and Kibana as well as deleted all of the .monitoring-* indices and am still seeing the same issue.
When you look at the /_nodes/settings API for the "monitored" cluster, do you see any settings such as:
xpack.monitoring.collection.interval
xpack.monitoring.exporters
xpack.monitoring.enabled
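One quick way to check these is with curl against the _nodes/settings API on the monitored cluster. This is a sketch, not an exact recipe: the host/port (localhost:9200) is an assumption, and the jq variant assumes jq is installed; adjust for your deployment and add credentials if security is enabled.

```shell
# Dump each node's settings and scan for any xpack.monitoring.* keys.
# localhost:9200 is an assumption; point this at a node in the monitored cluster.
curl -s 'http://localhost:9200/_nodes/settings?pretty' | grep -i 'monitoring'

# Or, if jq is available, pull out just the monitoring settings per node:
curl -s 'http://localhost:9200/_nodes/settings' \
  | jq '.nodes[] | {name: .name, monitoring: .settings.xpack.monitoring}'
```

If nothing comes back, the nodes are running with the defaults (monitoring enabled, 10s interval, local exporter).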
If the enabled setting is there, it should not be false for any of the nodes in the cluster.
If the collection.interval setting is there, it should be at least 10 seconds. If it is -1, Monitoring is disabled for that node.
If the exporters setting is there, it should be the same for all of the nodes in the cluster AND Kibana has to "point" to the monitoring cluster through the xpack.monitoring.elasticsearch.url setting in kibana.yml.
Collection interval and exporters should be the same for every node in the cluster.
If you don't have monitoring exporters configured for the nodes, you should not have a xpack.monitoring.elasticsearch.url setting in kibana.yml.
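For reference, when a dedicated monitoring cluster is in use, the relevant kibana.yml line looks like the following. The URL here is a placeholder, not a real endpoint:

```yaml
# kibana.yml -- only set this when your exporters ship monitoring data to a
# separate monitoring cluster; omit it entirely when using the default local exporter.
xpack.monitoring.elasticsearch.url: "http://monitoring-cluster.example.com:9200"
```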
As @tsullivan is getting at, there are a few things that can go wrong here:
1. X-Pack monitoring can be disabled via the node's settings. But your node settings appear to be using the defaults entirely.
2. Depending on whether or not you define xpack.monitoring.elasticsearch.url in your kibana.yml, the Monitoring UI may be looking at the wrong cluster for monitoring data.
3. With the cluster settings, it's possible to change the exporter(s) to point to a different location. The node settings that you added to this post suggest that it is using the local exporter, which keeps data in the same cluster. If someone overrode this in the cluster settings, then the data may be going to a different cluster. This can be done dynamically, so it may explain a sudden change.
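Because exporters can be overridden dynamically, it is worth inspecting the persistent and transient cluster settings as well (host/port assumed, as above):

```shell
# Any xpack.monitoring.exporters entry in persistent or transient settings
# overrides what is in elasticsearch.yml on the nodes.
curl -s 'http://localhost:9200/_cluster/settings?pretty'
```

Empty persistent and transient objects in the response mean nothing has been overridden at the cluster level.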
That's pretty normal. If X-Pack monitoring settings have not been messed with, then it's enabled by default and it uses the local exporter (routes documents to the same cluster).
The things left to check are number 2 from above, and:
GET /_cat/indices/.monitoring*?v
If this returns no indices, then this is likely the cause.
If Kibana is not seeing it, then that seems like the cause. There is a possibility that the monitoring indices are missing some data, but the index document counts look pretty consistent so I doubt it.
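To sanity-check whether the monitoring indices are still accumulating documents, you can compare doc counts over a short interval (host/port again assumed):

```shell
# docs.count should increase between the two runs if collection is active.
curl -s 'http://localhost:9200/_cat/indices/.monitoring-*?v&h=index,docs.count,store.size'
sleep 30
curl -s 'http://localhost:9200/_cat/indices/.monitoring-*?v&h=index,docs.count,store.size'
```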
So here's another data point. We run Kibana as Docker images. As an experiment, I kicked off another Kibana Docker image running 5.6.6 (the same version as our ES node). Sure enough, monitoring returned. Digging deeper into our existing setup, we found we had been running an older Kibana (5.4.3). Once I redeployed it with the newer version, the monitoring returned.