ES is taking significant time to start

We are starting the ES process using the .bat file, on version 5.4.

[2018-03-21T15:40:42,295][INFO ][o.e.n.Node ] [hxa_pUV] starting ...
[2018-03-21T15:46:20,944][INFO ][o.e.t.TransportService ] [hxa_pUV] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2018-03-21T15:46:23,981][INFO ][o.e.c.s.ClusterService ] [hxa_pUV] new_master {hxa_pUV}{hxa_pUVDSGqXgwSipL-sxw}{Lw8gBGtAQY-9r0ojPPB9rw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-03-21T15:46:24,025][INFO ][o.e.g.GatewayService ] [hxa_pUV] recovered [0] indices into cluster_state
[2018-03-21T15:49:17,086][INFO ][o.e.h.n.Netty4HttpServerTransport] [hxa_pUV] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2018-03-21T15:49:17,089][INFO ][o.e.n.Node ] [hxa_pUV] started

We started seeing this recently on a set of machines (not all of them), and it seems to be pretty consistent on those machines.

Any pointers on how to debug/fix this?

Thanks, Divya


That is pretty slow. What are the resources on the node? What JVM?

The node (a single node) has no data. We are using 1.61 java.

Note: With the same machine configuration and Java version, it's working fine for most people.

We recommend installing Java version 1.8.0_131 or a later version in the Java 8 release series.
This is clearly mentioned in the Elasticsearch documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html
How is it possible that the other instances are running? :thinking:


Sorry about the typo: this is the exact Java version we are using - "installed java version: 1.8.0.72".

So upgrade the JVM at least.
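A quick way to confirm which JVM Elasticsearch will actually pick up is to check the version from the command line (a sketch, assuming `java` is on the PATH; the startup scripts prefer the JVM pointed to by `JAVA_HOME` when it is set):

```shell
# Show the JVM on the PATH (ES 5.x requires Java 8; 1.8.0_131 or later is recommended)
java -version

# If JAVA_HOME is set, check that JVM instead, since the bat/shell scripts use it
"$JAVA_HOME/bin/java" -version
```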

The six minutes from starting ... to publish_address are the problem here; the other three seconds are expected from discovery. Would you please start Elasticsearch with -E logger.level=debug and post all the log lines between starting ... and publish_address?
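A minimal sketch of the suggested debug run, assuming the stock 5.x layout and starting from the Elasticsearch home directory (the `-E` setting syntax is the same for the .bat script on Windows and the shell script on Linux):

```shell
# Start Elasticsearch with the root logger at debug level (ES 5.x -E setting syntax)
bin/elasticsearch -E logger.level=debug

# On Windows, the equivalent invocation is:
#   bin\elasticsearch.bat -E logger.level=debug
```

The debug log lines between `starting ...` and `publish_address` should then show which startup phase is consuming the time.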

The issue seems to have been caused by an OS upgrade. After another update, it was fixed. Thanks for the suggestions.

Next time, if we encounter the same issue, I'll share the logs as suggested.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.