Weird timeouts with transport client after re-indexing

Hi everyone,
We have a fairly serious issue with our Elasticsearch cluster, which consists of 3 master nodes, 3 data nodes, and 2 client nodes.
Our code creates time-based indices every half an hour. This is meant to simplify time-based expiry and to address some other design requirements/constraints in the system. We are looking at making the indices cover a longer period (maybe a day), but first we need to resolve issues in the present stack that seem unrelated to the index design.

It all started when we had around 2500 indices with the default setting for number_of_shards (5). This made our queries incredibly slow. Based on a recommendation from ES support, we re-indexed after changing number_of_shards to 1 in our index template, as there isn't a lot of data per index at the moment.
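For reference, the template change looked roughly like this (the template name and index pattern below are illustrative, not our real ones):

```shell
# Create/update an index template so that new half-hourly indices get a
# single primary shard plus one replica (ES 2.x template syntax).
# "reports_template" and the "reports:*" pattern are placeholders.
curl -XPUT 'http://localhost:9200/_template/reports_template' -d '{
  "template": "reports:*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}'
```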

The re-index seemed to do the trick: our query times came down from 15s to 5s. We are, however, seeing transport client delays that are 8-10 times the actual ES query time (we printed the "took" value from the JSON response in the transport client to compare the time ES actually took against when we got the response). ES takes 5s per the "took", but the transport client returns only after 25-45s.
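A minimal sketch of the comparison we are doing, run here against a hard-coded sample response rather than a live cluster (the JSON below is illustrative; against a real cluster you would capture the curl output and time the call):

```shell
#!/bin/sh
# Compare the server-side "took" (ms) from a search response against the
# wall-clock time observed by the client. The response is a canned sample.
response='{"took":5000,"timed_out":false,"hits":{"total":12}}'

# Extract the "took" value in ms with grep/cut (no jq dependency).
took_ms=$(echo "$response" | grep -o '"took":[0-9]*' | cut -d: -f2)

wall_ms=45000   # e.g. measured around the client call
echo "server took: ${took_ms}ms, client saw: ${wall_ms}ms"
echo "overhead: $((wall_ms - took_ms))ms"
```

In our case the gap between the two numbers is the 20-40s we cannot account for.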

We are also seeing tons of seemingly unconnected exceptions.
In the master we see:
[2017-07-20 00:56:24,054][WARN ][discovery.zen.publish ] [staging-1-elasticsearch-main-master-1] timed out waiting for all nodes to process published state [4] (timeout [30s], pending nodes: [{staging-1-elasticsearch-main-data-1}{p5RtXb8HTR6UUK96t5nAuA}{}{}])
[2017-07-20 00:56:24,116][WARN ][cluster.service ] [staging-1-elasticsearch-main-master-1] cluster state update task [zen-disco-join(join from node[{staging-1-elasticsearch-main-data-1}{p5RtXb8HTR6UUK96t5nAuA}{}{}])] took 30.2s above the warn threshold of 30s

In our transport client we see:
{"timestamp":"2017-07-20T04:03:49.004+00:00","message":"[Achilles] SSL/TLS handshake failed, closing channel: null","loggerName":"org.elasticsearch.shield.transport.netty","thread":"elasticsearch[Achilles][transport_client_worker][T#15]{New I/O worker #327}","level":"ERROR","className":"org.elasticsearch.common.logging.log4j.Log4jESLogger","methodName":"internalError","fileName":"","line":140}

We were seeing this:
[T#1]","level":"INFO","stack_trace":"org.elasticsearch.transport.ReceiveTimeoutTransportException: [][][cluster:monitor/nodes/liveness] request_id [4] timed out after [5000ms]\n\tat

but after we changed the client timeout to a larger value, this seems to have gone away.

On the ES Client node we see the following:
[2017-07-19 01:07:50,753][WARN ][netty.handler.ssl.SslHandler] Unexpected leftover data after SSLEngine.unwrap(): status=OK handshakeStatus=NEED_WRAP consumed=0 produced=0 remaining=7 data=15030300020100

We are unsure what the cause of these errors is, and whether the re-index (or the process we used to re-index) left the cluster in an unstable state.

Has anyone run into this kind of issue? Any pointers would be very helpful.


It sounds like you still have far too many indices and shards for a cluster that size. Switch to having each index cover a longer time period, and ideally tune this so that the average shard size is between a few GB and a few tens of GB. You could also look into using the rollover index API to achieve this.

How many indices/shards do you currently have in the cluster? How much data does this correspond to? You can get this from the cluster stats API.

Hi @Christian_Dahlqvist, thank you for your prompt response!
We are actively working on changing our indexing design so that we have far fewer indices.
Our indices are not large at all. For example, a sample snippet from our /_cat/indices output looks like this:
    green open reports:1493744400000:new 1 1    38 0 53.9kb 26.9kb
    green open reports:1499182200000:new 1 1  4652 0  2.7mb  1.3mb
    green open reports:1498755600000:new 1 1  4096 0  2.4mb  1.2mb
    green open reports:1500166800000:new 1 1 12511 0  7.3mb  3.6mb
    green open reports:1498447800000:new 1 1  9660 0  5.4mb  2.7mb
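As a quick sanity check of how small these are, the saved _cat/indices snippet can be averaged with awk (column 9 is the primary store size in the default _cat output; the heredoc just replays the lines above):

```shell
#!/bin/sh
# Average primary-store size across a few indices, fed from a saved
# /_cat/indices snippet. Field 9 is pri.store.size (kb or mb here).
awk '
  $9 ~ /kb$/ { n++; sum += $9 + 0 }           # "26.9kb" -> 26.9
  $9 ~ /mb$/ { n++; sum += ($9 + 0) * 1024 }  # "1.3mb"  -> 1331.2
  END { printf "indices: %d, avg primary size: %.1f kb\n", n, sum / n }
' <<'EOF'
green open reports:1493744400000:new 1 1 38 0 53.9kb 26.9kb
green open reports:1499182200000:new 1 1 4652 0 2.7mb 1.3mb
green open reports:1498755600000:new 1 1 4096 0 2.4mb 1.2mb
green open reports:1500166800000:new 1 1 12511 0 7.3mb 3.6mb
green open reports:1498447800000:new 1 1 9660 0 5.4mb 2.7mb
EOF
```

So the average primary shard in this sample is under 2 MB, orders of magnitude below the few-GB guideline.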

Our cluster stats API output:

    {
      "timestamp" : 1500529802789,
      "cluster_name" : "elasticsearch-data",
      "status" : "green",
      "indices" : {
        "count" : 2557,
        "shards" : {
          "total" : 5123,
          "primaries" : 2561,
          "replication" : 1.0003904724716908,
          "index" : {
            "shards" : {
              "min" : 2,
              "max" : 10,
              "avg" : 2.0035197497066877
            },
            "primaries" : {
              "min" : 1,
              "max" : 5,
              "avg" : 1.0015643332029722
            },
            "replication" : {
              "min" : 1.0,
              "max" : 2.0,
              "avg" : 1.000391083300743
            }
          }
        },
        "docs" : {
          "count" : 12307065,
          "deleted" : 182806
        },
        "store" : {
          "size_in_bytes" : 7730584630,
          "throttle_time_in_millis" : 0
        },
        "fielddata" : {
          "memory_size_in_bytes" : 63366832,
          "evictions" : 0
        },
        "query_cache" : {
          "memory_size_in_bytes" : 15959496,
          "total_count" : 24917442,
          "hit_count" : 340707,
          "miss_count" : 24576735,
          "cache_size" : 24121,
          "cache_count" : 24568,
          "evictions" : 447
        },
        "completion" : {
          "size_in_bytes" : 0
        },
        "segments" : {
          "count" : 19996,
          "memory_in_bytes" : 375120286,
          "terms_memory_in_bytes" : 202396598,
          "stored_fields_memory_in_bytes" : 8407928,
          "term_vectors_memory_in_bytes" : 0,
          "norms_memory_in_bytes" : 3720576,
          "doc_values_memory_in_bytes" : 160595184,
          "index_writer_memory_in_bytes" : 0,
          "index_writer_max_memory_in_bytes" : 3740711594,
          "version_map_memory_in_bytes" : 0,
          "fixed_bit_set_memory_in_bytes" : 4585528
        },
        "percolate" : {
          "total" : 0,
          "time_in_millis" : 0,
          "current" : 0,
          "memory_size_in_bytes" : -1,
          "memory_size" : "-1b",
          "queries" : 0
        }
      },
      "nodes" : {
        "count" : {
          "total" : 8,
          "master_only" : 3,
          "data_only" : 0,
          "master_data" : 3,
          "client" : 0
        },
        "versions" : [ "2.4.4" ],
        "os" : {
          "available_processors" : 34,
          "allocated_processors" : 34,
          "mem" : {
            "total_in_bytes" : 109747888128
          },
          "names" : [ {
            "name" : "Linux",
            "count" : 8
          } ]
        },
        "process" : {
          "cpu" : {
            "percent" : 0
          },
          "open_file_descriptors" : {
            "min" : 313,
            "max" : 11985,
            "avg" : 4528
          }
        },
        "jvm" : {
          "max_uptime_in_millis" : 18222151,
          "versions" : [ {
            "version" : "1.8.0_112",
            "vm_name" : "Java HotSpot(TM) 64-Bit Server VM",
            "vm_version" : "25.112-b15",
            "vm_vendor" : "Oracle Corporation",
            "count" : 8
          } ],
          "mem" : {
            "heap_used_in_bytes" : 14797480408,
            "heap_max_in_bytes" : 99561635840
          },
          "threads" : 506
        },
        "fs" : {
          "total_in_bytes" : 2408114774016,
          "free_in_bytes" : 2385477734400,
          "available_in_bytes" : 2263011090432,
          "spins" : "true"
        },
        "plugins" : [ {
          "name" : "license",
          "version" : "2.4.4",
          "description" : "Internal Elasticsearch Licensing Plugin",
          "jvm" : true,
          "classname" : "org.elasticsearch.license.plugin.LicensePlugin",
          "isolated" : false,
          "site" : false
        }, {
          "name" : "shield",
          "version" : "2.4.4",
          "description" : "Elasticsearch Shield (security)",
          "jvm" : true,
          "classname" : "org.elasticsearch.shield.ShieldPlugin",
          "isolated" : false,
          "site" : false
        }, {
          "name" : "cloud-gce",
          "version" : "2.4.4",
          "description" : "The Google Compute Engine (GCE) Cloud plugin allows to use GCE API for the unicast discovery mechanism.",
          "jvm" : true,
          "classname" : "",
          "isolated" : true,
          "site" : false
        } ]
      }
    }

We understand that this will significantly improve query performance. However, the issue at hand seems to be some problem with the transport client. The reason we suspect that is that the query returns deterministically in 5s each time, both per our curl measurement and per the "took" time in the transport client's JSON response (we can improve that 5s by changing the indexing scheme). The problem is that even though ES says it ran the query in 5s, we only get the response back after 45s. Does this not indicate that we have moved past the index-design-related issues?

As you have so many indices and shards, which most likely also results in a large cluster state, it looks like changes that require the cluster state to be updated are timing out, which can cause a lot of problems. Having that many shards is very, very inefficient, and I suspect the cluster will go back to operating fine once this is resolved.

What is the optimal number of indices? We presently have 1 shard + 1 replica per index.
Changing our indexing will take some time. In the meantime, is there any way to make these timeouts and SSL handshake failures go away without changing the number of indices?

I am very puzzled that this would be the only problem, as we previously had deterministic and much lower latencies when we had 5 shards/index (it was 15s per query, still long, but the "took" value and the time we received the response matched exactly).

Also, another basic question: the rollover index API looks interesting. What happens when document _ids are repeated across multiple indices that are getting rolled into one? Will these still be distinct documents inside the new index? Our ES version is 2.4 and I don't see this API there; is there perhaps a manual way to do this in 2.4?

The optimal number of indices and shards depends a lot on the use case and the size of the cluster. As it seems it is the size of the cluster state that is causing problems, since updates take a long time, I would recommend reindexing into daily indices. If I read your stats right, your 2557 half-hourly indices cover roughly 53 days of data, so daily indices would bring the count down to around 50-60, corresponding to a bit over 100 shards. If you are looking at a long retention period, you could potentially even go for weekly or monthly indices if the index size example you provided is typical.
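A quick back-of-the-envelope check, assuming all 2557 indices are half-hourly and each daily index keeps 1 primary + 1 replica:

```shell
#!/bin/sh
# 2557 half-hourly indices -> days of data covered, and the index/shard
# count after consolidating to one index per day (1 primary + 1 replica).
indices=2557
per_day=48                    # 48 half-hour indices per day
days=$((indices / per_day))   # integer days of data covered
echo "days of data: ~${days}"
echo "daily indices: ~${days}, shards: ~$((days * 2))"
```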

At the moment you are generating a new index every 30 minutes, and each new index causes a cluster state update, which seems to be slow. I am not sure exactly what effects this can have on the cluster, but if everything was running fine until you reached the current volume of shards, I would suspect your issues are linked.

Also check the logs for anything reported around long GC pauses that could be causing problems.

The rollover API was introduced in ES 5.x, so it is unfortunately not available in the version you are using. I do believe the IDs generated by Elasticsearch are unique even across indices, so you should not see any clashes when reindexing, but it may be better to get a second opinion from someone with more in-depth knowledge of the ID generation.
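As for a manual way to approximate rollover on 2.4: one common pattern is to write through an alias and atomically repoint it at a new index when the old one is "full", using the indices aliases API (which does exist in 2.x). The alias and index names below are illustrative:

```shell
# Manual stand-in for rollover on ES 2.4: the application writes to the
# "reports_write" alias, and this call atomically moves the alias from
# the old index to a freshly created one. Names here are placeholders.
curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions": [
    { "remove": { "index": "reports-000001", "alias": "reports_write" } },
    { "add":    { "index": "reports-000002", "alias": "reports_write" } }
  ]
}'
```

You would still need your own logic to decide when to roll (by age or doc count), which is essentially what the 5.x rollover API automates.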

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.