Hi
I have a cluster with 30 nodes; each node has 40 CPU cores, 18 TB of disk (RAID 1), and 128 GB of memory.
I am indexing a huge number of documents (about 300 * 10^9).
Someone told me that if I set the heap size to 60GB, the performance of the cluster would increase. So I changed it to 60GB and switched the GC to G1GC.
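For reference, the change in jvm.options was along these lines (a sketch from memory, not a copy of the exact file):

```
# jvm.options: bigger heap, CMS replaced with G1
-Xms60g
-Xmx60g
-XX:+UseG1GC
```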
The cluster is doing well, but I am wondering whether this was a good idea or not!
From what I have read in various documentation, the recommended size is just under 32GB, and raising it toward 40GB can actually decrease performance. So what about setting it to 60GB or more?
First, see https://www.elastic.co/guide/en/elasticsearch/reference/7.4/heap-size.html
The reason for preferring compressed oops is the ability to use 32-bit pointers. Once the heap grows past the point where 32-bit pointers can address it (around 32GB), the JVM has to fall back to 64-bit pointers, which take up more space per object, so part of that larger heap is consumed by pointer overhead instead of being available to your Java application.
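You can check whether a node is actually running with compressed oops via the nodes info API (assuming a node reachable on localhost:9200):

```
curl -s 'localhost:9200/_nodes/jvm?filter_path=nodes.*.name,nodes.*.jvm.using_compressed_ordinary_object_pointers&pretty'
```

With a 60GB heap and default JVM settings this will report false on every node.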
Unless there is a reason to go beyond 30GB (or whatever the compressed-oops limit is on your JVM), I would start monitoring the systems (using Stack Monitoring and the nodes stats/info APIs) to find out how much heap you really need on a day-to-day basis. Depending on your use case and the amount of data per node, it might be much less, and you may be fine with smaller heaps, leaving more memory for the filesystem cache.
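For example, the nodes stats API shows actual heap usage per node; I would sample something like this over time (host assumed):

```
curl -s 'localhost:9200/_nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_used_percent&pretty'
```

The number to watch is where heap_used_percent settles after each GC, not the peaks.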
As I am ingesting a huge number of documents, the 60GB heap is always about 99% full.
I still didn't understand: does setting it to more than 30GB decrease performance or not?
I see:

```
node01 gc[1119590] overhead, spent [563ms] collecting in the last [1.5s]
```
As I saw here, if the heap goes above 48GB, performance will increase again. Is that right?
Should I still keep the heap size at 50% of physical memory?
Ingesting a huge number of documents does not necessarily mean your heap is always 99% full. Elasticsearch claims all the available heap at startup, so from the operating system's perspective it looks as if all that memory is taken, but that does not mean it is actually being used.
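You can see the difference between the two views yourself: the OS shows the process holding the whole heap, while the nodes stats call I mentioned earlier shows how much of it is actually in use (a sketch, assuming a Linux host):

```
# OS view: resident memory of the java process shows the full 60GB,
# because the heap is claimed up front (-Xms == -Xmx)
ps -o rss,args -C java
```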
Also, more memory does not automatically equal more performance, as you assumed in your last post.
Just to be clear, I am looking at heap.percent from _cat/nodes.
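i.e. something like this (host assumed):

```
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent'
```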
As my hardware is strong enough, I have two choices now:
- Run two instances of Elasticsearch on each node, each with a 30GB heap (a rough sketch of this option is below)
- Keep one instance per node, as now, with a 60GB heap
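For the two-instance option, I imagine the per-node configs would look roughly like this (node names, ports, and paths are placeholders, not a tested setup):

```
# /etc/elasticsearch-a/elasticsearch.yml  (first instance)
node.name: node01-a
http.port: 9200
transport.port: 9300
path.data: /data/a

# /etc/elasticsearch-b/elasticsearch.yml  (second instance)
node.name: node01-b
http.port: 9201
transport.port: 9301
path.data: /data/b

# cluster-wide, so a primary and its replica never share a physical host:
cluster.routing.allocation.same_shard.host: true
```

with -Xms30g/-Xmx30g in each instance's own jvm.options.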