Alright, in that case I'm not really sure why the cluster stats collector stopped working around the time of the upgrade. When you performed the upgrade, did the elected master node change? The cluster stats collector only runs on the elected master node, so perhaps that has something to do with it. However, other collectors also run only on the elected master node, so I'm not sure why the cluster stats collector alone would be affected.
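In case it helps, you can check which node is currently the elected master with the `_cat` API:

```
GET _cat/master?v
```

If the node listed there is not the same node that was the elected master before the upgrade, that would at least confirm that the master role moved during the upgrade.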
Perhaps we could try to restart collection and see if that fixes the issue. Please try the following steps next:
- Stop all monitoring collection by running the following query against Elasticsearch:

  ```
  PUT _cluster/settings
  {
    "persistent": {
      "xpack.monitoring.collection.enabled": false
    }
  }
  ```

- Wait about 20 seconds, then re-run the query with the long output that you ran earlier. Verify that the `timestamp`s in the output are at least 20 seconds old. This will confirm that collection has indeed stopped.

- Start up collection again:

  ```
  PUT _cluster/settings
  {
    "persistent": {
      "xpack.monitoring.collection.enabled": true
    }
  }
  ```

- Wait about 20 seconds, then re-run the same query. Verify that the `timestamp`s in the output are current (within the last 10 seconds or so). This will confirm that collection has indeed restarted. In particular, verify that the `timestamp` nested inside the object with `"key": "cluster_stats"` is current. (If you no longer have that query handy, see the sketch after this list.)

- If all of the timestamps are current, visit the Kibana Monitoring UI and check whether it's working again.

- If any of the timestamps are not current, check the Elasticsearch master node's logs for any errors and post them here.
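In case you need it, here's a rough sketch of one way to pull the most recent `timestamp` per collector type. This assumes monitoring data is being written to the local `.monitoring-es-*` indices with the usual `type` and `timestamp` fields (adjust the index pattern if you ship monitoring data to a dedicated monitoring cluster), and it may not match the exact query you ran earlier:

```
GET .monitoring-es-*/_search
{
  "size": 0,
  "aggs": {
    "group_by_type": {
      "terms": { "field": "type" },
      "aggs": {
        "latest_timestamp": { "max": { "field": "timestamp" } }
      }
    }
  }
}
```

In the response, the bucket with `"key": "cluster_stats"` should show a `latest_timestamp` within the last 10 seconds or so once collection is running again.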
Thanks.