Getting "[parent] Data too large" errors in Kibana logs

I am getting the following error in the Kibana logs, and my Kibana instance can no longer load index patterns. Could someone explain what is happening in this error?

{"type":"error","@timestamp":"2019-07-16T05:35:01Z","tags":,"pid":1,"level":"error","error":{"message":"[parent] Data too large, data for [<http_request>] would be [6106612450/5.6gb], which is larger than the limit of [6103767449/5.6gb], real usage: [6106612040/5.6gb], new bytes reserved: [410/410b]: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [6106612450/5.6gb], which is larger than the limit of [6103767449/5.6gb], real usage: [6106612040/5.6gb], new bytes reserved: [410/410b], with { bytes_wanted=6106612450 & bytes_limit=6103767449 & durability="TRANSIENT" }","name":"Error","stack":"[circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [6106612450/5.6gb], which is larger than the limit of [6103767449/5.6gb], real usage: [6106612040/5.6gb], new bytes reserved: [410/410b], with { bytes_wanted=6106612450 & bytes_limit=6103767449 & durability="TRANSIENT" } :: {"path":"/.kibana/_search","query":{"size":10000,"from":0,"_source":"index-pattern.title,namespace,type,references,migrationVersion,updated_at,title","rest_total_hits_as_int":true},"body":"{\"seq_no_primary_term\":true,\"query\":{\"bool\":{\"filter\":[{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"term\":{\"type\":\"index-pattern\"}}],\"must_not\":[{\"exists\":{\"field\":\"namespace\"}}]}}],\"minimum_should_match\":1}}]}}}","statusCode":429,"response":"{\"error\":{\"root_cause\":[{\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [6106612450/5.6gb], which is larger than the limit of [6103767449/5.6gb], real usage: [6106612040/5.6gb], new bytes reserved: [410/410b]\",\"bytes_wanted\":6106612450,\"bytes_limit\":6103767449,\"durability\":\"TRANSIENT\"}],\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [6106612450/5.6gb], which is larger than the limit of [6103767449/5.6gb], real usage: [6106612040/5.6gb], new bytes reserved: [410/410b]\",\"bytes_wanted\":6106612450,\"bytes_limit\":6103767449,\"durability\":\"TRANSIENT\"},\"status\":429}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. 
(/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":"?type=index-pattern&fields=title&per_page=10000","query":{"type":"index-pattern","fields":"title","per_page":"10000"},"pathname":"/api/saved_objects/_find","path":"/api/saved_objects/_find?type=index-pattern&fields=title&per_page=10000","href":"/api/saved_objects/_find?type=index-pattern&fields=title&per_page=10000"},"message":"[parent] Data too large, data for [<http_request>] would be [6106612450/5.6gb], which is larger than the limit of [6103767449/5.6gb], real usage: [6106612040/5.6gb], new bytes reserved: [410/410b]: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [6106612450/5.6gb], which is larger than the limit of [6103767449/5.6gb], real usage: [6106612040/5.6gb], new bytes reserved: [410/410b], with { bytes_wanted=6106612450 & bytes_limit=6103767449 & durability="TRANSIENT" }"}
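For context: the 429 above comes from Elasticsearch's parent circuit breaker, which in 7.x tracks real heap usage and rejects requests once the heap is nearly full (the limit of 6103767449 bytes is 95% of the node's roughly 6 GB heap, the 7.x default), so Kibana's search against the .kibana index was refused rather than risking an OutOfMemoryError. A minimal way to inspect the breaker on each node, assuming Elasticsearch is reachable on localhost:9200 without authentication:

# Sketch: inspect circuit breaker state on every node (standard node stats API)
curl -s 'http://localhost:9200/_nodes/stats/breaker?pretty'
# Compare breakers.parent.estimated_size_in_bytes with breakers.parent.limit_size_in_bytes,
# and check breakers.parent.tripped to see how often the breaker has fired.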

Are there any errors in your Elasticsearch logs? What is the output of the cluster state API?

The ES cluster health is green:

{
"cluster_name" : "logs001",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 8,
"number_of_data_nodes" : 3,
"active_primary_shards" : 16,
"active_shards" : 33,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
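A green health status only means all shards are allocated; it says nothing about heap pressure on individual nodes. A quick way to see how close each node is to its heap limit, under the same assumption as above that the cluster answers on localhost:9200:

# Sketch: per-node heap and CPU pressure via the cat nodes API
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,node.role,heap.percent,heap.max,ram.percent,cpu'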


Hmm, Kibana is back now. Could you explain what that message in the Kibana logs means?

I am using the Spark ES connector to dump data into the ES cluster, and I got a similar error on the Spark side, as described in this ticket.

But for that one, I could easily work around it by restarting the Spark jobs.
For this one, I had to stop all traffic just now before the cluster came back online.
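If heavy Spark indexing is what pushes the heap over the breaker limit, the write thread pool on the data nodes will usually show queueing and rejections while it happens. A quick check, under the same localhost assumption as above:

# Sketch: look for indexing back-pressure on the write thread pool
curl -s 'http://localhost:9200/_cat/thread_pool/write?v&h=node_name,name,active,queue,rejected'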

It sounds like you are overloading Elasticsearch. What heap size do you have configured for your different nodes? What is the specification of the hosts?

2 client nodes: 6 GB Java heap, 12 GB memory, 2 CPU cores
3 master nodes: 6 GB Java heap, 12 GB memory, 2 CPU cores
3 data nodes: 16 GB Java heap, 32 GB memory, 2 CPU cores (1 TB SSD attached to each node)

What does CPU usage look like on the data nodes when you are indexing and getting these errors? What bulk size are you using? How many concurrent indexing threads?

Thanks for the reply. Could you suggest how I can check the bulk size and the number of concurrent indexing threads?
I am going to increase the number of CPU cores first.

That is more of a Spark configuration question, so I can not help there.
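For reference, on the elasticsearch-hadoop (Spark connector) side the bulk request size is governed by the es.batch.size.entries and es.batch.size.bytes settings, and each Spark task writing to Elasticsearch acts as one concurrent indexer. A minimal sketch of lowering the batch size at submit time; the values and the job name are only examples, and the spark. prefix is the documented workaround for passing es.* properties through spark-submit:

# Sketch: reduce bulk size for the Spark ES connector (example values, adjust to taste)
spark-submit \
  --conf spark.es.batch.size.entries=500 \
  --conf spark.es.batch.size.bytes=1mb \
  --conf spark.es.batch.write.retry.count=6 \
  your_spark_job.py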

What garbage collector are you using? I have observed similar error messages in the monitoring UI when using the G1GC garbage collector.

This is what I get from my ps output:

/opt/jdk-11.0.1/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFracti
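To confirm which heap size and GC flags each Elasticsearch node actually started with, the nodes info API can be filtered down to the JVM arguments; a minimal sketch, same localhost assumption as earlier:

# Sketch: show the JVM flags every node was launched with
curl -s 'http://localhost:9200/_nodes/jvm?filter_path=nodes.*.name,nodes.*.jvm.input_arguments&pretty'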

@Magnus_Kessler did you solve your issue by changing the GC? Which JDK version and which GC would you suggest based on your observations?

I observed the issue in a Docker setup after I had activated G1GC. The issue disappeared after I switched back to the old default GC, which you also have active through -XX:+UseConcMarkSweepGC. Not sure I can help any further, as your case happens in a different subsystem.


I am using Docker as well, but on Kubernetes. I am also testing @Christian_Dahlqvist's suggestion that the system may be getting overloaded.

The same issue is happening to me.
I have a server running Kibana 7.3 on OpenJDK 11, on an AWS Ubuntu 18.04 instance with 4 GB of RAM and 2 GB allocated to Elasticsearch.
It is running the monitoring (Marvel) UI against a simple 3-node cluster, also on 7.3.
If I just watch the Kibana monitoring summary screen for a few minutes, it kills Elasticsearch with the same error as the first poster's.

All commands to Elasticsearch fail at this point with circuit_breaking_exceptions.
The Elasticsearch instance behind Kibana has only the following indices at that point:

green open .monitoring-es-7-2019.08.01 G0mV0MbkTkyenmcbx-_0IA 1 0 2535 1904  2.6mb  2.6mb
green open .kibana_1                   N47OPCEaQFyIpX1bn6X86w 1 0    4    0 15.3kb 15.3kb

Eventually the server recovers, until I look at the screen again, at which point it fails.
The server is a completely new install that has only been running for 30 minutes.
There is practically nothing going on, so I can't imagine I should be having memory issues.
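To rule out a non-default breaker configuration, the effective breaker limits can be read back from the cluster settings, including their defaults (indices.breaker.total.use_real_memory, a static node setting, is what makes the 7.x parent breaker track real heap usage). A minimal sketch, again assuming the cluster answers on localhost:9200:

# Sketch: show effective circuit breaker limits (defaults plus any overrides)
curl -s 'http://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty' | grep breaker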

Making the change Magnus suggests seems to fix the issue,
i.e. -XX:-UseG1GC -XX:+UseConcMarkSweepGC.
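For anyone else applying this change: the GC flags live in Elasticsearch's jvm.options file, or in Docker setups they can be appended through the ES_JAVA_OPTS environment variable. A minimal sketch; the image tag and the heap size are only examples, not a recommendation:

# Sketch: start a 7.3 container with G1GC explicitly disabled and CMS enabled
# (ES_JAVA_OPTS is appended after jvm.options, so these flags win over earlier ones)
docker run -e ES_JAVA_OPTS="-Xms2g -Xmx2g -XX:-UseG1GC -XX:+UseConcMarkSweepGC" \
  docker.elastic.co/elasticsearch/elasticsearch:7.3.0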
