Hello, there are 16 nodes in the cluster in total: 13 data nodes and 3 master nodes.
The master nodes have 64 GB of RAM, but the JVM heap is set to 16 GB.
The data nodes have 64 GB of RAM, but the JVM heap is set to 30 GB.
Does the JVM heap need to be 32 GB on all servers, regardless of whether they are master or data nodes? And would it be a problem if I set the JVM heap to 32 GB?
I'm getting an OutOfMemory error when transferring data with Logstash. Could this be related?
The Java heap should be no larger than around 30 GB so that the JVM can still use compressed object pointers. Increasing the heap to 32 GB would therefore be a mistake. Can you show us the full output of the cluster stats API?
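For reference, here is a minimal sketch of how the heap is typically set, assuming a default Elasticsearch install where the heap size lives in `config/jvm.options` (the 30g value is illustrative and should match your own sizing):

```
# config/jvm.options -- keep Xms and Xmx equal, and below ~31 GB
# so the JVM can keep using compressed ordinary object pointers (oops)
-Xms30g
-Xmx30g
```

You can check whether compressed oops are actually in effect on each node, and pull the cluster stats requested above, with the nodes info and cluster stats APIs (assuming Elasticsearch is reachable on localhost:9200):

```
# Per-node JVM info; look for "using_compressed_ordinary_object_pointers"
curl -s "localhost:9200/_nodes/_all/jvm?pretty" | grep using_compressed

# Full cluster stats output requested above
curl -s "localhost:9200/_cluster/stats?human&pretty"
```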