Is changing the JVM heap size a problem?

Hello, there are 16 nodes in the cluster in total: 13 data nodes and 3 master nodes.

The master nodes have 64 GB of RAM, but the JVM heap is set to 16 GB.

The data nodes have 64 GB of RAM, but the JVM heap is set to 30 GB.

Does the JVM heap on all servers need to be 32 GB, regardless of whether a node is a master node or a data node? Also, would it be a problem if I set the JVM heap to 32 GB?

I'm getting an OutOfMemory error when transferring data with Logstash. Could this have something to do with the situation?

The Java heap should be no larger than around 30 GB so that you benefit from compressed object pointers (compressed oops). Increasing the heap to 32 GB would therefore be a mistake. Can you show us the full output of the cluster stats API?

`_cluster/stats` API outputs

If you cannot see the outputs, just tell me.

In addition to Christian's comments: you can usually set master-only nodes to use less heap; 8 GB should be heaps.
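For anyone wondering where that setting lives: the heap size is configured with matching `-Xms`/`-Xmx` entries in each node's JVM options file (typically `config/jvm.options`, or a file dropped into `config/jvm.options.d/` on more recent Elasticsearch versions). A sketch under those assumptions, with the exact values as examples rather than recommendations:

```
## Data nodes: 30 GB heap out of 64 GB RAM (stays under the compressed-oops limit)
-Xms30g
-Xmx30g

## Master-only nodes: a smaller heap, e.g. 8 GB, is usually plenty
# -Xms8g
# -Xmx8g
```

Setting `-Xms` and `-Xmx` to the same value avoids heap resizing at runtime; a restart of the node is needed for the change to take effect.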
