No monitoring data found

We've been running Kibana and Elasticsearch 5.6 with the Basic license for quite a few months, mainly for the monitoring feature. For some reason, when I go to the monitoring section now I get the dreaded "No monitoring data found" error. I've searched the forums and seen that others are having this issue, but none of those topics resolved it for me.

I can see that both .monitoring-es-6* and .monitoring-kibana-6* indices are being written regularly and appear to be current. I've also restarted both ES and Kibana and deleted all of the .monitoring-* indices, but I'm still seeing the same issue.

Any help is appreciated!


When you look at the /_nodes/settings API for the "monitored" cluster, do you see any settings such as:

xpack.monitoring.collection.interval
xpack.monitoring.exporters
xpack.monitoring.enabled
  • If the enabled setting is there, it should not be false for any of the nodes in the cluster.
  • If the collection.interval setting is there, it should be at least 10 seconds. If it is -1, Monitoring is disabled for that node.
  • If the exporters setting is there, it should be the same for all of the nodes in the cluster AND Kibana has to "point" to the monitoring cluster through the xpack.monitoring.elasticsearch.url setting in kibana.yml.
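For example, to narrow the Nodes Info response down to just those settings, you could run something like this from the Kibana Console or via curl (just a sketch; filter_path only trims the response):

GET /_nodes/settings?filter_path=nodes.*.settings.xpack.monitoring

If that comes back empty, none of the monitoring settings are explicitly set on any node and the defaults are in effect.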

Collection interval and exporters should be the same for every node in the cluster.

If you don't have monitoring exporters configured for the nodes, you should not have an xpack.monitoring.elasticsearch.url setting in kibana.yml.
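Conversely, if the nodes do ship monitoring data to a dedicated monitoring cluster via an http exporter, kibana.yml needs to point at that cluster. A rough sketch (the host below is just a placeholder):

# kibana.yml: only needed when an http exporter sends data to a separate monitoring cluster
xpack.monitoring.elasticsearch.url: "http://your-monitoring-cluster:9200"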

/_nodes/settings is one of the Nodes Info APIs: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-info.html

Hope that helps!

Checking out your suggestions, I've noticed that the only xpack settings we have are:

                "xpack": {
                    "ml": {
                        "enabled": "false"
                    },
                    "security": {
                        "enabled": "false"
                    },
                    "watcher": {
                        "enabled": "false"
                    }
                }

How would you suggest we move forward?

Other than the cluster's node settings, do you have an xpack.monitoring.elasticsearch.url setting in kibana.yml?

@alexvollmer

Could you provide the output for:

GET /_cluster/settings

as well?

As @tsullivan is getting at, there are a few things that can go wrong here:

  1. X-Pack monitoring can be disabled via the nodes' settings, but yours appear to be using the defaults.
  2. Depending on whether or not you define xpack.monitoring.elasticsearch.url in your kibana.yml, the Monitoring UI may be looking at the wrong cluster for monitoring data.
  3. With the cluster settings, it's possible to change the exporter(s) to point to a different location. The node settings that you added to this post suggest that it is using the local exporter, which keeps data in the same cluster. If someone overrode this in the cluster settings, then it may be going to a different cluster. This can be done dynamically, so it may explain a sudden change.
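Just to illustrate point 3, an override along these lines (the exporter name and host are made up) would dynamically redirect monitoring documents to another cluster without a node restart:

PUT /_cluster/settings
{
  "persistent": {
    "xpack.monitoring.exporters.my_remote.type": "http",
    "xpack.monitoring.exporters.my_remote.host": "http://some-other-cluster:9200"
  }
}

The GET /_cluster/settings output requested above will show whether anything like that is in place.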

Hope that helps,
Chris

It looks like our cluster settings are empty:

{
  "transient": {},
  "persistent": {}
}

Hi @alexvollmer,

That's pretty normal. If X-Pack monitoring settings have not been messed with, then it's enabled by default and it uses the local exporter (routes documents to the same cluster).
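That default is roughly equivalent to configuring a local exporter explicitly in elasticsearch.yml, something like this (the exporter name is arbitrary):

xpack.monitoring.exporters:
  my_local:
    type: local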

The things left to check are number 2 from above, and:

GET /_cat/indices/.monitoring*?v

If this returns no indices, then this is likely the cause.

Thanks,
Chris


That's the weird thing about this. When I check, there are monitoring indices:

$ http prod-esw.smmt.io:8080/_cat/indices/.monitoring\*\?v

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 640
Content-Type: text/plain; charset=UTF-8
Date: Mon, 26 Feb 2018 17:35:36 GMT
Server: nginx/1.12.1
content-encoding: gzip

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .monitoring-kibana-6-2018.02.26 ogapX5UWTtSzdY5DJmF0hg   1   1       6294            0      4.4mb          4.4mb
yellow open   .monitoring-kibana-6-2018.02.24 036cKPiiRPyBieEMnrVx_w   1   1       8638            0      3.1mb          3.1mb
yellow open   .monitoring-es-6-2018.02.26     OIRJnER6TmCALS3JrfgwOA   1   1    1628561        11557    993.5mb        993.5mb
yellow open   .monitoring-es-6-2018.02.21     GGLOm3neSZmbh1_eYRrl1Q   1   1    2131526         7544      1.2gb          1.2gb
yellow open   .monitoring-es-6-2018.02.22     BkJ2qy5GStCcBldS9cLRIg   1   1    2148853        15834      1.2gb          1.2gb
yellow open   .monitoring-kibana-6-2018.02.22 vR3e5kaoQTSTIFz49WCooQ   1   1       8639            0      3.1mb          3.1mb
yellow open   .monitoring-kibana-6-2018.02.20 MPRs3_zTQ2OzD1e9o_RQTw   1   1       8638            0      3.1mb          3.1mb
yellow open   .monitoring-es-6-2018.02.24     29vVnW5_SYqvWvqavXJfJg   1   1    2183491        15604      1.2gb          1.2gb
yellow open   .monitoring-kibana-6-2018.02.21 wUtoydc8TiKKjVbnaoh-WQ   1   1       8638            0      3.1mb          3.1mb
yellow open   .monitoring-es-6-2018.02.23     Pona51QKTHWRfXRHlgywNQ   1   1    2165927        12976      1.2gb          1.2gb
yellow open   .monitoring-kibana-6-2018.02.23 M0br5L1_SliA0lB8eI1GqA   1   1       8638            0      3.1mb          3.1mb
yellow open   .monitoring-es-6-2018.02.25     3KTwIplGSPmt4qvWyOlaOA   1   1    2203234         7023      1.2gb          1.2gb
yellow open   .monitoring-kibana-6-2018.02.25 xjaGQ18UTFCstZ89pkSg7g   1   1       8635            0        3mb            3mb
yellow open   .monitoring-es-6-2018.02.20     M6WK8Lg9Qr2rajtCsOwOWA   1   1    2113960        10538      1.2gb          1.2gb

The indices appear to be current, but it's as if Kibana just doesn't know about them. I'll take a look at the kibana.yml settings.

Thanks, let us know.

If Kibana is not seeing it, then that seems like the cause. There is a possibility that the monitoring indices are missing some data, but the index document counts look pretty consistent so I doubt it.

Chris

So here's another data point. We run Kibana in Docker. As an experiment, I kicked off another Kibana container running 5.6.6 (the same version as our ES nodes). Sure enough, monitoring returned. Digging deeper into our existing setup, I found we had been running an older Kibana (5.4.3). Once I redeployed it with the newer version, monitoring came back.

Ah, that makes sense. There was a change in Monitoring 5.5+ that expects both Elasticsearch and Kibana to be on 5.5 or later.
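For anyone else who hits this, a quick way to compare the two versions, using the same httpie style as above (hosts and ports are placeholders):

# Elasticsearch version: check version.number in the response
http your-es-host:9200/

# Kibana version: the status API reports the running version
http your-kibana-host:5601/api/status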

Glad that you were able to find the cause!

Chris

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.