Elasticsearch 1.1.1 nodes going down as new indices are added

We have 4 nodes with 16 GB each.
As we add new indices, we see OOM errors and the nodes go down one by one.
I believe this is the "gazillion shards" problem, and I am looking for evidence that supports this, e.g. how much overhead a single shard takes up.
Also, can we tweak heap, JVM, network, I/O, etc. before adding one more node to the cluster?
Please suggest.
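For context, this is roughly how we have been counting shards and watching per-node heap (the host is a placeholder; the cat APIs have been available since 1.0):

```
# one line per shard; pipe through `wc -l` for a total count
curl -s 'localhost:9200/_cat/shards?v'

# per-node heap and memory usage
curl -s 'localhost:9200/_cat/nodes?v'
```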

A lot of shards will surely not help, but you are also not able to benefit from off-heap memory in the form of doc_values as you are on such an ancient version. I would recommend upgrading, at least to 1.7.6.
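To illustrate what that buys you: on 1.7.x you can enable doc_values per field in the mapping, so the fielddata used for sorting and aggregations lives on disk rather than on the heap. A minimal sketch with made-up index, type, and field names (in 1.x this applies to not_analyzed strings and numeric/date fields):

```
# hypothetical index and fields, shown only to illustrate the mapping shape
curl -XPUT 'localhost:9200/logs-2016.01.01' -d '{
  "mappings": {
    "event": {
      "properties": {
        "status": { "type": "string", "index": "not_analyzed", "doc_values": true },
        "bytes":  { "type": "long",   "doc_values": true }
      }
    }
  }
}'
```

Note that doc_values only apply to newly indexed data, so existing indices would need to be reindexed to benefit.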

Thanks, Christian, for the reply.

Can you please point me to a document with more details on the off-heap memory usage of doc_values? How do we set it up? How much memory pressure will it take off the JVM?

It has been a long time since I looked at documentation around the introduction of doc_values, as they are the norm nowadays. They have been available since version 1.0, but I recall that performance improved a lot late in the 1.x series, which allowed them to be made the default for many field types in more recent versions. Have a look at Elastic's old blog posts introducing doc_values, though.

It is impossible to estimate the benefit as it will depend a lot on the use case.
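You can, however, measure what there is to gain: the fielddata currently held on the heap is roughly what doc_values would move off it. Something like this shows it per node (host is a placeholder):

```
# heap currently consumed by fielddata, per node and field
curl -s 'localhost:9200/_cat/fielddata?v'
```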
