Elasticsearch: Cold Nodes

Hi Guys,

We have a set of cold nodes for our logs that see very little use; only a few developers send queries to them. However, because of the number of days we retain logs on these cold nodes, the data keeps growing very fast (currently 9TB per node). Since no indexing happens on these nodes, would you recommend increasing the heap size from 31GB to 45GB to avoid the constant memory pressure and circuit breaker trips on every request, which essentially make the node useless?
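For context, this is roughly how we check the breaker trips on those nodes (a minimal sketch with Python and the requests library; the host address and lack of auth are placeholders, adjust for your setup):

```python
# Minimal sketch: inspect circuit breaker trips via the nodes stats API.
# Assumes the requests library and a cluster reachable at
# http://localhost:9200 without authentication; adjust for your setup.
import requests

resp = requests.get("http://localhost:9200/_nodes/stats/breaker")
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    print(node["name"])
    for breaker, stats in node["breakers"].items():
        # "tripped" counts how often this breaker rejected a request.
        print(f'  {breaker}: tripped={stats["tripped"]}, '
              f'limit={stats["limit_size"]}, estimated={stats["estimated_size"]}')
```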

Thanks

Elasticsearch Version: 6.2.4
Shards Per Node: 1500
Data Per Node: 9TB

9TB across 1500 shards works out to an average shard size of around 6GB, which is quite small. I would recommend that you watch this webinar and read this blog post. If you are able to upgrade to version 6.8, you may also want to look into using frozen indices on these cold nodes. This feature is also described in this blog post.
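If you want to double-check that maths on your own cluster, something along these lines sums the store size per node via the _cat/shards API (a rough sketch using Python and requests; the host and lack of auth are assumptions, adjust to your environment):

```python
# Rough sketch: average shard size per node from the _cat/shards API.
from collections import defaultdict
import requests

resp = requests.get(
    "http://localhost:9200/_cat/shards",
    params={"format": "json", "bytes": "b", "h": "node,store"},
)
resp.raise_for_status()

totals = defaultdict(lambda: [0, 0])  # node -> [total bytes, shard count]
for shard in resp.json():
    if shard.get("store") and shard.get("node"):  # skip unassigned shards
        totals[shard["node"]][0] += int(shard["store"])
        totals[shard["node"]][1] += 1

for node, (size, count) in sorted(totals.items()):
    print(f"{node}: {count} shards, avg {size / count / 1024**3:.1f} GiB per shard")
```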

But frozen indices seem to be a commercial feature; is there an open-source alternative?

Frozen indices sound perfect for your use case, and although they are not included in the purely open-source distribution, they are part of the basic license, which is free of charge.
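Once you are on 6.8, freezing the older indices is a single call per index to the _freeze endpoint, and searches then have to opt in to frozen indices explicitly. A quick sketch (Python with requests; the index names and host are placeholders):

```python
# Quick sketch of freezing older indices on 6.8 with the basic (free) license.
import requests

old_indices = ["logs-2019.01.01", "logs-2019.01.02"]  # hypothetical names

for index in old_indices:
    # _freeze makes the index read-only and drops most of its heap
    # footprint until the index is actually searched.
    resp = requests.post(f"http://localhost:9200/{index}/_freeze")
    resp.raise_for_status()
    print(index, resp.json())

# Frozen indices are skipped by searches unless you explicitly opt in:
search = requests.get(
    "http://localhost:9200/logs-*/_search",
    params={"ignore_throttled": "false"},
    json={"query": {"match_all": {}}},
)
print(search.json()["hits"]["total"])
```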
