Recommendations for performance tuning of a server with 256 GB of RAM

Hello,

I have set up a standalone ELK server running Elasticsearch, Logstash and Kibana; no other servers or replicas are involved.
My server has 256 GB of RAM. Can you please indicate the correct heap size values to define for both Elasticsearch and Logstash? I understand that a general recommendation is to use half of the available RAM, but I don't know whether that still applies with this much memory, or whether Logstash and Elasticsearch will share the same defined Java heap or each get an independent one.

Can you please share the values to set for the Xms and Xmx variables in /etc/elasticsearch/jvm.options and /etc/logstash/jvm.options?
Thanks

Hi @mantis,

I suppose you have already read https://www.elastic.co/guide/en/elasticsearch/reference/6.7/heap-size.html

One recommendation that comes up often is:

Don’t set Xmx to above the cutoff that the JVM uses for compressed object pointers (compressed oops); the exact cutoff varies but is near 32 GB.
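
If you want to check this on your own machine, one way (assuming the JDK that Elasticsearch runs on is available on your PATH) is to ask the JVM directly whether compressed oops are still in use at a given heap size:

    # Prints the effective value of UseCompressedOops for a 31 GB heap;
    # it flips to false once Xmx crosses the compressed-oops cutoff
    java -Xms31g -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

Elasticsearch also reports this at startup, with a log line like "heap size [...], compressed ordinary object pointers [true]", so you can confirm it from the Elasticsearch log as well.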

I run the Elastic Stack on machines that have ~250 GB of RAM. I leave about 50% of it for the kernel's file system cache. The remaining ~125 GB I spread across Kibana, Logstash and Elasticsearch. I actually run more than one Elasticsearch instance on these machines, as there is plenty of RAM: e.g. 4x Elasticsearch with a 25 GB heap each, 1x Logstash with a 16 GB heap, and 1x Kibana with 8 GB.

I set both Xms and Xmx to the same value for each service, using the sizes listed above.
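
As a concrete sketch using the sizes above (substitute your own numbers), each Elasticsearch node's /etc/elasticsearch/jvm.options contains:

    # 25 GB heap; Xms == Xmx so the heap is never resized at runtime
    -Xms25g
    -Xmx25g

and /etc/logstash/jvm.options contains:

    # 16 GB heap for Logstash
    -Xms16g
    -Xmx16g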

I run 4 Elasticsearch instances because the machines have many individual fast disks, and I dedicate one disk to each Elasticsearch node as its data path.
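
To give a rough idea, a per-node elasticsearch.yml fragment might look like the following (the node name and path are made up for illustration):

    # Node 1 of 4 on the same host, with its own dedicated disk
    node.name: es-node-1
    path.data: /mnt/disk1/elasticsearch

You don't need to assign ports by hand for this: by default, each additional instance on the host binds the next free port in the 9200-9300 range.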

Thanks A_B, do you know how to determine exactly what the cutoff is that the JVM uses for compressed object pointers, and whether it can be increased to 64 GB? How much RAM would you assign to Elasticsearch if you had only one instance? Thanks.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.