Ingest node, what's the deal? Is it required?!

Dear all,

Am I the only one struggling with ELK and ingest nodes since 5.x? Let's consider my simple ELK test cluster consisting of 3 nodes. Previously I would assign them the roles of master, data, and client. Now, the client node also has the ingest setting set to true.
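For reference, the role flags in each node's elasticsearch.yml look roughly like this (a simplified sketch, hostnames and all other settings omitted):

# master node
node.master: true
node.data: false
node.ingest: false

# data node
node.master: false
node.data: true
node.ingest: false

# client / coordinating node (the only one with ingest enabled)
node.master: false
node.data: false
node.ingest: true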

I tend to start my master node first. However, that node then constantly throws error messages saying there are no ingest nodes in the cluster! The error disappears when I start up my client node, which also acts as an ingest node.

Now, the thing is ... I don't do any kind of ingesting myself. So it seems that ELK since 5.x requires an ingest node to function; however, I can't find this anywhere in the documentation.

I'm not sure, but I even think a cluster without an ingest node simply doesn't work, so ignoring the error isn't an option.

So, is setting up an ingest node really a requirement?
If so, it seems that since 5.x it's no longer possible to have a CLEAN ELK cluster boot-up sequence: you want to start the master first, but that fails horribly without an ingest node, and you can't boot an ingest node cleanly without first having a master to talk to.

Am I doing something wrong? Did I miss something in the docs?

I think that's happening because you have registered pipelines. Does it still happen if you don't have any pipelines?

I haven't specifically defined pipelines and I don't see any mention of pipelines in the Elasticsearch yml files.
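If it helps, I believe pipelines can also be listed through the API (assuming GET _ingest/pipeline is the right endpoint to list them), so I could run this to double-check:

curl -XGET 'http://localhost:9200/_ingest/pipeline?pretty'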

Interesting. Can you run a get cluster state query?

So ... I've shut down my cluster completely and restarted my master only.

I am then able to request info from that node using curl, such as server health, state, ...
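For example, requests along these lines work fine (assuming the default HTTP port 9200):

curl -XGET 'http://localhost:9200/_cluster/health?pretty'
curl -XGET 'http://localhost:9200/_cluster/state?pretty'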

The error I'm referring to is:

[2017-03-24T10:22:26,752][ERROR][o.e.x.m.AgentService     ] [mwo-monitor-master-node-d-el2213] exception when exporting documents
org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.doFlush(ExportBulk.java:148) ~[x-pack-5.2.2.jar:5.2.2]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.close(ExportBulk.java:77) ~[x-pack-5.2.2.jar:5.2.2]
        at org.elasticsearch.xpack.monitoring.exporter.Exporters.export(Exporters.java:183) ~[x-pack-5.2.2.jar:5.2.2]
        at org.elasticsearch.xpack.monitoring.AgentService$ExportingWorker.run(AgentService.java:196) [x-pack-5.2.2.jar:5.2.2]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulk [default_local]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.doFlush(LocalBulk.java:114) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.flush(ExportBulk.java:62) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.doFlush(ExportBulk.java:145) ~[?:?]
        ... 4 more
Caused by: java.lang.IllegalStateException: There are no ingest nodes in this cluster, unable to forward request to an ingest node.
        at org.elasticsearch.action.ingest.IngestActionForwarder.randomIngestNode(IngestActionForwarder.java:58) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.ingest.IngestActionForwarder.forwardIngestRequest(IngestActionForwarder.java:51) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:135) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:82) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:173) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:145) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:87) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:75) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:64) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:67) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.xpack.security.InternalClient.doExecute(InternalClient.java:83) ~[?:?]
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:62) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.doFlush(LocalBulk.java:108) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.flush(ExportBulk.java:62) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.doFlush(ExportBulk.java:145) ~[?:?]
        ... 4 more

Thank you for this detailed information.

So it's related to X-Pack. We are going to look at this internally. Thanks a lot!

Actually it's documented here: https://www.elastic.co/guide/en/x-pack/5.2/monitoring-settings.html#local-exporter-settings

Look at the use_ingest option.

xpack.monitoring.exporters.my_local:
  type: local
  use_ingest: false
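
If you want to verify that the setting was picked up after restarting the node, you could check the node's settings through the API (a sketch, assuming the default HTTP port 9200):

curl -XGET 'http://localhost:9200/_nodes/_local/settings?pretty'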

I hope this helps

Thanks, I'll give that a try...

Typical issue with documentation, knowing where to look :wink:

It would be even better if such a situation could be detected automatically.

That being said, if you are starting to create dedicated master nodes, it means you want to build a proper cluster architecture, and in that case the recommendation is to have a dedicated monitoring cluster, in which case this problem won't happen.


Hmm. I'm aware that there's an article on setting up a dedicated meta-monitoring cluster, but I haven't had time yet (or a reason) to really dive into that.

My production cluster actually has 3 master nodes, 4 data nodes and 2 client (coordinating) nodes.

But thanks for the info, I'll consider the meta-monitoring cluster approach as well, though knowing our management and budgets ... :s
