Old question, but at a loss: [http_request] data would be larger than the limit of [*****/4.1gb]

Hello,
I already increased the Java heap sizes for graylog-server (4.3.13) and Elasticsearch. According to the nodes overview, my Graylog heap settings of 8/12 GB are not exhausted. Elasticsearch can run searches (sometimes) and usually has at least 20 GB on any of the 4 nodes. Nevertheless, my log files get flooded with the errors below, and I don't know where I would be able to raise the 4.1 GB limit.

Thanks for any hints.

[parent] Data too large, data for [<http_request>] would be [4449250311/4.1gb], which is larger than the limit of [4448655769/4.1gb], usages [request=0/0b, fielddata=1012822694/965.9mb, in_flight_requests=594083/580.1kb, accounting=3435833534/3.1gb], errorDetails=[[parent] Data too large, data for [<http_request>] would be [4449250311/4.1gb], which is larger than the limit of [4448655769/4.1gb], usages [request=0/0b, fielddata=1012822694/965.9mb, in_flight_requests=594083/580.1kb, accounting=3435833534/3.1gb]]}
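For anyone hitting the same message: the [parent] in that error is Elasticsearch's parent circuit breaker, and its limit (indices.breaker.total.limit) defaults to a percentage of the JVM heap, 95% on ES 7.x with real-memory accounting (70% without it). A 4.1 GB limit therefore implies the node that answered was running with only about a 4.4 GB heap at the 95% default, far smaller than the 20+ GB expected here. A minimal sketch of how to inspect and, as a stopgap, adjust it, assuming Elasticsearch answers on localhost:9200:

# Per-node breaker limits and current usage (the parent breaker is what tripped here)
curl -s "localhost:9200/_nodes/stats/breaker?pretty"

# Stopgap only: the breaker limit is a dynamic cluster setting;
# raising it just defers the out-of-memory risk
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"transient":{"indices.breaker.total.limit":"95%"}}'

# The sustainable fix is more heap: set -Xms/-Xmx in jvm.options
# (e.g. /etc/elasticsearch/jvm.options on package installs; path varies)
# and restart the node.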

OK, it turns out the messages were caused by node 3, which has a 26 GB Elasticsearch heap; I restarted Elasticsearch there. I found out after noticing the node was missing from the assigned-shards overview:

curl -X GET "localhost:9200/_cat/allocation?v&pretty"
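In case it helps anyone tracking down a misbehaving node the same way, heap pressure and lost shards can also be checked quickly with the cat APIs; a small sketch, assuming the default localhost:9200 endpoint:

# Heap usage per node; a node near 100% heap.percent is the likely breaker-tripper
curl -s "localhost:9200/_cat/nodes?v&h=name,node.role,heap.percent,heap.max,ram.percent"

# Shards that lost their node show up as UNASSIGNED
curl -s "localhost:9200/_cat/shards?v" | grep UNASSIGNED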

What is the output of:

GET /
GET /_cat/nodes?v
GET /_cat/health?v
GET /_cat/indices?v
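Those are in Kibana dev-tools notation; if you prefer to run them from the shell like the allocation call above, the curl equivalents, assuming the default localhost:9200 endpoint, are:

curl "localhost:9200/"
curl "localhost:9200/_cat/nodes?v"
curl "localhost:9200/_cat/health?v"
curl "localhost:9200/_cat/indices?v"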

If some outputs are too big, please share them on gist.github.com and link them here.
