Hi, I have a problem with self-monitoring in Elasticsearch 7.17.3.
It worked correctly until I changed some settings with:
PUT http://server:9200/_settings
Afterwards I tried to fix it with:
PUT http://server:9200/_settings
Filebeat/Heartbeat monitoring was restored, but self-monitoring is still not available.
I found one topic with an issue like mine, where the solution was to remove the index template (monitoring-es), but it's a system template and I'm afraid to delete it.
How can I fix this issue? How can I find out what the problem with self-monitoring is?
Elasticsearch 7.17.3 cluster with 3 data nodes and 1 Kibana node.
What do your Elasticsearch logs show?
[2022-05-13T00:00:00,374][WARN ][o.e.x.m.MonitoringService] [tb-elk001.bee.mobitel.local] monitoring execution failed
org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$doFlush$0(ExportBulk.java:110) [x-pack-monitoring-7.17.3.jar:7.17.3]
at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:144) [elasticsearch-7.17.3.jar:7.17.3]
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulk [default_local]
... 101 more
Caused by: java.lang.IllegalStateException: There are no ingest nodes in this cluster, unable to forward request to an ingest node.
at org.elasticsearch.action.ingest.IngestActionForwarder.randomIngestNode(IngestActionForwarder.java:52) ~[elasticsearch-7.17.3.jar:7.17.3]
at org.elasticsearch.action.ingest.IngestActionForwarder.forwardIngestRequest(IngestActionForwarder.java:42) ~[elasticsearch-7.17.3.jar:7.17.3]
at org.elasticsearch.action.bulk.TransportBulkAction.doInternalExecute(TransportBulkAction.java:244) ~[elasticsearch-7.17.3.jar:7.17.3]
... 92 more
This would be worth checking. You can use the
_cat/nodes?v API to check what roles your nodes have enabled.
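If it helps, the "no ingest nodes" condition is visible directly in that output: the `node.role` column lists one letter per role, and `i` stands for ingest. A small sketch (hypothetical helper, not part of Elasticsearch) that checks the column:

```python
def has_ingest_node(cat_nodes_output: str) -> bool:
    """Return True if any row's node.role column contains the ingest role ('i')."""
    lines = cat_nodes_output.strip().splitlines()
    # Skip the header line; node.role is the 8th column (index 7)
    for line in lines[1:]:
        cols = line.split()
        if len(cols) >= 8 and "i" in cols[7]:
            return True
    return False

sample = """\
ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
x.x.x.223 58           98          0   0.16    0.18    0.12     dm        -      elk001
x.x.x.115 17           82          0   0.00    0.01    0.05     lr        -      kibana
x.x.x.224 36           97          0   0.10    0.04    0.05     dm        *      elk002
x.x.x.225 11           99          0   0.03    0.04    0.05     dm        -      elk003
"""
print(has_ingest_node(sample))  # prints False: no node carries the ingest role
```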
Please don't post pictures of text, logs, or code. They are difficult to read, impossible to search and replicate (if it's code), and some people may not even be able to see them.
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
x.x.x.223 58 98 0 0.16 0.18 0.12 dm - elk001
x.x.x.115 17 82 0 0.00 0.01 0.05 lr - kibana
x.x.x.224 36 97 0 0.10 0.04 0.05 dm * elk002
x.x.x.225 11 99 0 0.03 0.04 0.05 dm - elk003
Have you explicitly set the node roles in the configs?
node.roles: [ master, data ] - elk001
node.roles: [ data, master ] - elk002
node.roles: [ data, master ] - elk003
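If those lists are complete, none of the three nodes carries the ingest role, which matches the IllegalStateException in the log. One way to fix it would be to add the ingest role to at least one data node and restart it. A minimal sketch of the config change, assuming elk001 is the node you restart (hostname is illustrative):

```yaml
# elasticsearch.yml on elk001 — add the ingest role so monitoring
# bulk requests have an ingest node to be forwarded to
node.roles: [ master, data, ingest ]
```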
The problem was solved after setting up Metricbeat monitoring.
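For anyone landing here later: the Metricbeat-based monitoring that replaced legacy self-monitoring is driven by the `elasticsearch-xpack` module. A rough sketch of the module config, with the host URL as a placeholder for your own cluster:

```yaml
# modules.d/elasticsearch-xpack.yml
# Enable with: metricbeat modules enable elasticsearch-xpack
- module: elasticsearch
  xpack.enabled: true          # collect documents in the monitoring format
  period: 10s
  hosts: ["http://server:9200"]  # placeholder: your Elasticsearch URL
```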