We have a set of cold nodes for our logs whose utilisation is very low; only a few developers send queries to them. However, because of the number of days we retain logs on these cold nodes, the data keeps growing very fast (currently 9TB per node). Since no indexing happens on these nodes, do you recommend increasing the heap size from 31GB to 45GB to avoid the constant memory pressure and circuit-breaker trips on every request, which are essentially making the nodes useless?
Thanks
Elasticsearch Version: 6.2.4
Shards Per Node: 1500
Data Per Node: 9TB
It sounds like you have an average shard size of around 6GB, which is quite small. I would recommend you watch this webinar and read this blog post. If you are able to upgrade to version 6.8, you may also want to look into using frozen indices on these cold nodes. This new feature is also described in this blog post.
Frozen indices sound perfect for your use case, and although they are not included in the purely open-source distribution, they are included in the Basic license, which is free of cost.
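For reference, here is a minimal sketch of what freezing an index looks like on 6.8+, using Elasticsearch console syntax (the index name `logs-2019.01.01` is a placeholder; substitute your own indices):

```
# Freeze an index so its per-shard heap structures are released
# (available from Elasticsearch 6.8; index name is a placeholder)
POST /logs-2019.01.01/_freeze

# Frozen indices remain searchable, but searches against them are
# throttled and must opt in explicitly:
GET /logs-2019.01.01/_search?ignore_throttled=false
{
  "query": { "match": { "message": "error" } }
}

# To make the index fully heap-resident (and writable) again:
POST /logs-2019.01.01/_unfreeze
```

Since frozen shards load their data structures on demand per search rather than holding them permanently on the heap, this directly targets the memory pressure you describe, at the cost of slower queries on those indices.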