ES HEAP size allocation

Hi folks,

Recently I installed Cerebro, an ES monitoring plugin, on an ES client node.

After looking at the overview of the whole cluster's health, I found that half of the ES data nodes' heap usage shows a red flag. We have 20 ES data nodes in the cluster: half of them have 64GB of memory, to which I assigned a 30GB heap, and the other half have 48GB of memory, to which I assigned a 16GB heap. The problem appears on the nodes with the 16GB heap; they're using more than 80% of it.
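For what it's worth, the same per-node heap usage can be checked without a plugin via the `_cat/nodes` API (a sketch; adjust the host/port to your cluster):

```shell
# List each node's heap usage; a heap.percent that stays above ~75-80%
# usually means the node is under sustained memory pressure.
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.max'
```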

What's the best way to assign the heap size on those servers with 48GB of memory?

Should I assign a 24GB or a 30GB heap to the servers with 48GB of memory?

Thanks in advance

No more than 24GB then.

But maybe the first question is to understand why you need so much heap?

Hi @dadoonet,

Because I just followed the official ES docs, which explain in detail how to set the heap size in terms of the server's total memory: normally set the heap to half of total memory, but never over 32GB, even if the server has more.
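For reference, the heap is set in jvm.options (a sketch only; set min and max to the same value so the heap never resizes, and stay below the ~32GB compressed-oops cutoff):

```
# jvm.options: fixed heap sized to roughly half of RAM
# (example for a 48GB machine, per the half-of-memory rule above)
-Xms24g
-Xmx24g
```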

We're using the ES cluster to collect all kinds of monitoring metrics via Metricbeat or customized Beats, and our company needs to keep one year of metrics online.

By the way, here is an example config from one of our prod ES nodes. Would you please have a look and see if it's feasible?

Physical server specs:

CPU: 32 cores, MEM: 64GB

# ---------------------------------- Indexing ----------------------------------

# The field data cache is used mainly when sorting on or computing aggregations on a field.
indices.fielddata.cache.size: 75%

# The field data circuit breaker allows Elasticsearch to estimate the amount of memory a field will require to be loaded into memory.
indices.breaker.fielddata.limit: 85%


# query cache: caches the results of queries, as a percentage of total heap size
indices.queries.cache.size: 60%

# indexing buffer, as a percentage of total heap size
indices.memory.index_buffer_size: 20%

# minimum indexing buffer size
indices.memory.min_index_buffer_size: 3000m

# maximum indexing buffer size
indices.memory.max_index_buffer_size: 6000m

# shard request cache
indices.requests.cache.size: 30%



# ---------------------------------- Pool ----------------------------------
# int((cpu_core_count * 3) / 2) + 1
thread_pool.search.size: 49

#fixed number: 1000
thread_pool.search.queue_size: 1000

# cpu_core_count
thread_pool.bulk.size: 32

#fixed number: 200
thread_pool.bulk.queue_size: 200

# cpu_core_count
thread_pool.index.size: 32

#fixed number: 200
thread_pool.index.queue_size: 200

Please feel free to comment here if anyone has a good suggestion.

Regards

No more than 24GB then.
==> How about a 23GB heap for the servers with 48GB of memory?
