{
  "error": {
    "root_cause": [
      {
        "type": "circuit_breaking_exception",
        "reason": "[parent] Data too large, data for [<transport_request>] would be [1024631734/977.1mb], which is larger than the limit of [1003493785/957mb], real usage: [1024631512/977.1mb], new bytes reserved: [222/222b], usages [request=16440/16kb, fielddata=0/0b, in_flight_requests=495194/483.5kb, accounting=154193560/147mb]",
        "bytes_wanted": 1024631734,
        "bytes_limit": 1003493785,
        "durability": "PERMANENT"
      }
    ],
    "type": "circuit_breaking_exception",
    "reason": "[parent] Data too large, data for [<transport_request>] would be [1024631734/977.1mb], which is larger than the limit of [1003493785/957mb], real usage: [1024631512/977.1mb], new bytes reserved: [222/222b], usages [request=16440/16kb, fielddata=0/0b, in_flight_requests=495194/483.5kb, accounting=154193560/147mb]",
    "bytes_wanted": 1024631734,
    "bytes_limit": 1003493785,
    "durability": "PERMANENT"
  },
  "status": 429
}
The parent circuit breaker tripped in this log entry. In this case it means JVM heap usage is already too high to run the task, so the task gets rejected to avoid an Out-Of-Memory error.
Most likely, given the Elasticsearch JVM heap was only 1GB in this case, you should increase the JVM heap further, to a level where there is no longer memory pressure (up to a maximum of 50% of RAM or 30GB, whichever is lower).
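As a minimal sketch, the change goes in config/jvm.options; the 4g value here is just an assumption, pick a size that fits your RAM and keep Xms and Xmx equal:

-Xms4g
-Xmx4g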
Hello @Julien,
I have a 3-node cluster; each machine has 20GB RAM and the JVM is set to 14GB, so I think it should be enough, but it is not.
So if I want to keep more data, should I add another node?
First point: with 20GB RAM, the maximum JVM heap (Xms and Xmx) you should use is 10GB (50% of RAM). You also generally want the smallest heap that works: if you have issues at 1GB, try 2GB or 4GB, to keep as much RAM as possible for the filesystem cache.
Per the error, the parent circuit breaker is set to 957MB. If you have not customised it, this defaults to 95% of the JVM heap (or 70% on older versions, or when indices.breaker.total.use_real_memory is disabled), so I assume the JVM heap is currently configured at 1GB.
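If you want to confirm the configured limit, one way (assuming you have Kibana Dev Tools; the filter_path is just a convenience to trim the output) is the nodes stats breaker API:

GET _nodes/stats/breaker?filter_path=nodes.*.breakers.parent

The parent breaker's limit_size_in_bytes should match the bytes_limit value in the error above.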
You can verify this with ps -ef | grep java to see which jvm.options values are applied to the running Elasticsearch JVM process, or with GET _nodes in the Kibana Dev Tools console, looking for -Xmx and -Xms.
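For example (the filter_path here is again just an assumption to keep the response short):

ps -ef | grep java

GET _nodes/jvm?filter_path=nodes.*.jvm.input_arguments

The input_arguments array lists the actual -Xms and -Xmx flags each node's JVM was started with.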
Hi @Julien,
I have only changed parameters in jvm.options; -Xms and -Xmx are set to 14g. GET _nodes showed that there are other -Xms1g and -Xmx1g values but I don't know where they come from. Should these values also be changed?
I think if you changed jvm.options and restarted the Elasticsearch node, you should be good. You can check with GET _nodes/stats/jvm, looking at the value of jvm.mem.heap_max_in_bytes.
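A quick way to pull just that value (the filter_path is an assumption for brevity):

GET _nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_max_in_bytes

With -Xms14g/-Xmx14g applied, this should report roughly 14GB (about 15032385536 bytes) per node.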
@Julien
Hello,
I've discovered something: even if I change these parameters in jvm.options to -Xms16g and -Xmx16g, they don't take effect. Elasticsearch is installed on Windows 2012.
I've checked the same on Linux and there is no problem there.
How can I fix this problem?
There are multiple ways to install and run Elasticsearch on Windows, and I'm not sure which installer you used. Assuming this is 7.10 and you used the MSI installer, you should have set the heap via the SELECTEDMEMORY msiexec parameter.
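As a sketch, an unattended install with a 4GB heap would look something like this (the file name, version, and 4096 value are assumptions; SELECTEDMEMORY is in MB):

start /wait msiexec.exe /i elasticsearch-7.10.2.msi /qn SELECTEDMEMORY=4096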
Or check the doc about the zip install on Windows:
To adjust the heap size for an already installed service, use the service manager: bin\elasticsearch-service.bat manager.
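For an already installed service the flow looks like this (the path is an assumption based on a default zip install):

cd C:\elasticsearch-7.10.2
bin\elasticsearch-service.bat manager

Then set the initial and maximum memory pool on the Java tab and restart the service. As I understand it, the Windows service keeps the JVM options it was installed with, so editing jvm.options alone would not take effect here, which would explain the symptom you describe.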
This is not a package created by Elastic, so I have no knowledge of it. I would suggest directing the question on how to size the Elasticsearch JVM heap to Bitnami (or checking their documentation/forums in case it's already covered), or using one of the installers provided by Elastic that I referred to above.