Memory management for Elasticsearch

Hi Team,

I am trying to work out a good balance of total memory in a three-node Elasticsearch cluster.

I have a three-node Elasticsearch cluster, each node with 32 GB memory and 8 vCPUs. Which combination would be most suitable for balancing memory between all the components? I know there is no fixed answer, but I am trying to get as close as I can.

The different Elastic Stack components in use will be Beats (Filebeat, Metricbeat, Heartbeat), Logstash, Elasticsearch, and Kibana.

The main use case for this cluster is indexing application logs and querying them through curl calls, e.g. fetching the average response time over 7 or 30 days, or counting the different status codes over the last 24 hours or 7 days, so aggregations will be used. The other use case is monitoring and viewing logs through Kibana, but no ML jobs or dashboard creation.
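For illustration, those curl queries could be sketched as below. These are request sketches that assume a cluster at `localhost:9200`; the index pattern `app-logs-*` and the field names `response_time_ms` and `http.status_code` are made-up examples, not from the original post.

```shell
# Sketch only: index pattern and field names are assumptions.
# Average response time over the last 7 days.
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/app-logs-*/_search?size=0' -d '
{
  "query": { "range": { "@timestamp": { "gte": "now-7d" } } },
  "aggs": {
    "avg_response_time": { "avg": { "field": "response_time_ms" } }
  }
}'

# Count of each status code over the last 24 hours.
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/app-logs-*/_search?size=0' -d '
{
  "query": { "range": { "@timestamp": { "gte": "now-24h" } } },
  "aggs": {
    "status_codes": { "terms": { "field": "http.status_code" } }
  }
}'
```

Setting `size=0` skips returning the matching documents, so only the aggregation results come back.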

After going through the official docs below, the recommended heap sizes are as follows:

logstash -

The recommended heap size for typical ingestion scenarios should be no less than 4GB and no more than 8GB.
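For reference, that heap size is set in Logstash's `config/jvm.options` (a config fragment; 4g shown as an example within the recommended 4-8 GB range):

```
## config/jvm.options (Logstash)
-Xms4g
-Xmx4g
```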

elasticsearch -

Set Xms and Xmx to no more than 50% of your total memory. Elasticsearch requires memory for purposes other than the JVM heap
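For Elasticsearch the equivalent settings go in `jvm.options` (or a file under `config/jvm.options.d/`), with Xms and Xmx set to the same value; e.g. for a 16 GB heap (the 50% figure from option 1 below):

```
## config/jvm.options.d/heap.options (Elasticsearch)
-Xms16g
-Xmx16g
```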

Kibana -

I haven't found a default or recommended memory figure for Kibana, but on our single-node test cluster with 8 GB of memory it shows 256 MB used out of a 1.4 GB total (256 MB/1.4 GB).

beats -

I haven't found a default or recommended memory figure for Beats either, but they will also consume some amount.

What would be the ideal combination from the below?

  1. 32G = 16G for the OS + 16G for the Elasticsearch heap.
    From the OS's 16G: 4G for Logstash, say 4G for the three Beats, and 2G for Kibana.
    This leaves the OS with 6G, and if any new component has to be installed in the future (say APM, or anything else OS-related), it will have to share that 6G with the OS.

The above follows the official recommendations for all components (i.e. 50% for the OS and 50% for Elasticsearch).

  2. 32G = 8G for the Elasticsearch heap (25% for Elasticsearch).
    4G for Logstash + 4G for Beats + 2G for Kibana.
    This leaves 14G for the OS and any future components.
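The two splits above can be sanity-checked with a quick tally (a throwaway sketch; the numbers come from the options above):

```python
# Quick tally of the two proposed splits on a 32 GB node (numbers from the post).
TOTAL_GB = 32

def remaining(allocations):
    """Memory left for the OS / future components after the given allocations."""
    return TOTAL_GB - sum(allocations.values())

# Option 1: 16 GB Elasticsearch heap.
option1 = {"elasticsearch_heap": 16, "logstash": 4, "beats": 4, "kibana": 2}
# Option 2: 8 GB Elasticsearch heap.
option2 = {"elasticsearch_heap": 8, "logstash": 4, "beats": 4, "kibana": 2}

print(remaining(option1))  # 6 GB left with the OS
print(remaining(option2))  # 14 GB left for the OS and future components
```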

If we install Elasticsearch on all three nodes, and Logstash and Kibana on only two of them, then the remaining node will have lower memory consumption, but the question still remains for the first two nodes that run all of these components.

Am I missing anything that could change this memory allocation?

Any suggestion, whether a change to the above combinations or an entirely new one, is appreciated.

Thanks,

It depends.

Beats shouldn't need GBs, Kibana and Logstash a few, and Elasticsearch can use up to 50% of available memory. I would just start with 8GB for Elasticsearch and then let the rest use what they need.

@warkolm, thanks for the reply. Given your point that Elasticsearch can take 50%, don't you think it should be 16G instead of 8G from the start?

Given you're running multiple other processes on the same host, and unless you are immediately looking at large volumes of ingestion, no.

Thank you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.