Increasing the heap size

Hi,
We are getting this error:

[2021-10-01 08:11:12] local.ERROR: ReportDataProcess Job Exception: {"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[in_flight_requests] Data too large, data for [<http_request>] would be [4325617020/4gb], which is larger than the limit of [4294967296/4gb]","bytes_wanted":4325617020,"bytes_limit":4294967296,"durability":"TRANSIENT"}]

I searched for this error on Google, and the common recommendation is to increase the heap size.
You recommend setting the heap size in the JVM options to 50% of the node's total memory.
For example, if a node's total memory is 32 GB, I should set the heap size to 16 GB. So what if I set it to 20 GB instead? Will it have a negative impact?
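
For reference, this is how I understand we would set it manually (assuming Elasticsearch 7.x, where custom JVM flags go in a file under `config/jvm.options.d/`):

```
# config/jvm.options.d/heap.options
# Set minimum and maximum heap to the same value, e.g. 16 GB (half of 32 GB):
-Xms16g
-Xmx16g
```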

With the latest versions of Elasticsearch, the heap size is automatically set by Elasticsearch, so you should keep the default value.
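
If you want to check what heap size was picked automatically, you can ask the cluster directly, for example:

```
GET _cat/nodes?v&h=name,heap.max,heap.percent
```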

Yes, it could have a negative impact on performance: less memory would be left for the filesystem cache, so more data would have to be read from disk, which is slower.

But you have to test it with your use case.

Also, check the query you are running. Maybe you are doing something "wrong"?
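
I don't know what your ReportDataProcess job does, but the [in_flight_requests] breaker trips when incoming HTTP requests use too much memory, here almost 4 GB. If the job is, for example, indexing report data in one giant bulk request, splitting it into smaller batches keeps each request far below the limit. Here is a rough sketch with the Python client (the index name and documents are made up for illustration):

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # adjust to your cluster

# Stand-in for whatever rows the ReportDataProcess job produces.
report_rows = ({"report_id": i, "value": i * 2} for i in range(1_000_000))

def to_actions(rows):
    # One small index action per row; the "reports" index name is invented here.
    for row in rows:
        yield {"_index": "reports", "_source": row}

# helpers.bulk streams the actions in chunks (500 documents per request by
# default), so no single HTTP request comes anywhere near the 4 GB limit.
helpers.bulk(es, to_actions(report_rows), chunk_size=500)
```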
