Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [preallocate[aggregations]] would be [4143070784/3.8gb], which is larger than the limit of [4080218931/3.7gb], real usage: [4143064640/3.8gb],
Hello, there is a total of one node in the cluster: a single master node.
The master node has 64 GB of RAM, but the JVM heap is set to 4 GB.
I've read the alternatives you mentioned, but can you please tell me what the right way is?
I tried changing it to
-Xms8g
-Xmx8g
in my local VM ELK setup, but the Elasticsearch service failed to start.
Should I change the jvm.options file, and if so, how?
Is there another option, and how would it work?
Please advise if you know the solution.
Is there anything else running on the VM? If the VM has 64 GB of RAM and only hosts Elasticsearch, there should be no problem.
The heap should be set to no more than 50% of the RAM available to Elasticsearch, so with an 8 GB heap, Elasticsearch requires 16 GB of RAM to be available to it.
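For example, rather than editing jvm.options directly, Elasticsearch 7.7 and later reads override files from a jvm.options.d directory, which is the documented place for heap settings. A minimal sketch, assuming a deb/rpm package install (archive installs use config/jvm.options.d instead):

# /etc/elasticsearch/jvm.options.d/heap.options
# Xms and Xmx must be set to the same value
-Xms8g
-Xmx8g

If the service still fails to start, the reason is usually logged under /var/log/elasticsearch/ or visible via journalctl -u elasticsearch; a common cause is the VM simply not having enough free memory for the requested heap.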
Oh yeah, that's right, there is no other heavy tool running on the server with 64 GB of RAM.
But what I meant earlier was that before applying these settings on my actual ELK server, I tried them on a replica VM to test whether they work, and that is where they failed.
So with 64 GB of RAM, let's say I want to set the JVM heap to 16 GB.
What steps should I take? Which file and setting should I modify, and in what manner?
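A minimal sketch of those steps, assuming a deb/rpm package install of Elasticsearch 7.7 or later, and no TLS/authentication on the HTTP port (adjust the curl command otherwise):

# 1. Create an override file instead of editing jvm.options itself
sudo tee /etc/elasticsearch/jvm.options.d/heap.options <<'EOF'
-Xms16g
-Xmx16g
EOF

# 2. Restart the service
sudo systemctl restart elasticsearch

# 3. Confirm the node picked up the new heap size
curl 'localhost:9200/_cat/nodes?h=name,heap.max'

With 64 GB of RAM and nothing else running on the machine, a 16 GB heap stays well within the 50% guideline mentioned above.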