Memory or heap size

What is the ideal heap size for Elasticsearch when handling millions of records?

That is an unanswerable question. The minimum heap size needed to process an arbitrarily large number of records is probably a couple of hundred megabytes, maybe half a gigabyte. Adding more memory may improve performance, but there are engineering trade-offs: a larger heap can increase GC costs.

The minimum will also depend on which filters you are using. Some filters (translate, dns, jdbc_static, etc.) cache data in memory, and how much memory they use depends on how much data they cache.

Configure your filters, configure your heap, and benchmark it. If adding more memory improves performance and you can afford it, add it.
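For reference, both Logstash and Elasticsearch take their heap settings from the jvm.options file. A minimal sketch, using 4g purely as an arbitrary benchmarking starting point (the value is an assumption, not a recommendation):

```
# jvm.options -- set the initial (-Xms) and maximum (-Xmx) heap to the
# same value so the JVM does not resize the heap at runtime.
# 4g is an arbitrary starting point for benchmarking, not a recommendation.
-Xms4g
-Xmx4g
```

Keeping -Xms and -Xmx equal avoids pauses from heap resizing, which also makes benchmark results easier to compare across runs.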


Thank you for your response. What is the maximum size we can set for the heap in Elasticsearch?

There is no single answer to that either. The same advice applies: test it and see what works best.
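Since the filters mentioned above are Logstash filters, one way to watch heap usage and GC behavior while benchmarking, assuming a local Logstash instance with its monitoring API on the default port 9600 (for Elasticsearch the equivalent is GET _nodes/stats/jvm on port 9200):

```
# Query the Logstash node stats API for JVM heap and GC metrics.
# Assumes Logstash is running locally with the monitoring API enabled
# on its default port (9600).
curl -s 'http://localhost:9600/_node/stats/jvm?pretty'
```

As a general JVM note, heaps of roughly 32 GB or more lose compressed ordinary object pointers, so staying below that threshold is a common practical ceiling.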

Also, can you please tell me the maximum or threshold for setting the heap size? I am running a single-node cluster.
