statusCode:429 - High fielddata memory usage

Hi,

We experienced some strange behavior from Kibana: when opening Kibana, we got the following error:

    Error:
    {"statusCode":429,"error":"Too Many Requests","message":"[circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [4093269000/3.8gb], which is larger than the limit of [4063657984/3.7gb], real usage: [4093269000/3.8gb], new bytes reserved: [0/0b], with { bytes_wanted=4093269000 & bytes_limit=4063657984 & durability=\"PERMANENT\" }"}

We ran the command "GET /_cat/fielddata?v&fields=*" and discovered that the problem was caused by a single field with high fielddata memory usage (3 GB). This tripped the parent circuit breaker and caused the errors in Kibana. The affected node was also running at high heap usage (95%). Clearing the fielddata cache resolved the Kibana errors and stabilized the node.
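For reference, a minimal sketch of the diagnosis and cleanup in Dev Tools, assuming the clear-cache API is used to drop fielddata (`my-index` is a placeholder for the actual index name):

    # List per-node fielddata usage, broken down by field
    GET /_cat/fielddata?v&fields=*

    # Clear only the fielddata cache for the affected index
    POST /my-index/_cache/clear?fielddata=true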

We still don’t know why the fielddata memory usage was that high, or why fielddata is being used at all. We thought fielddata was disabled by default. We also checked the index that contains the problem field, but we don’t see any "text"-type fields in its mappings.
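For context, this is what we understood the default to be: fielddata on text fields is off unless you explicitly opt in through the mapping, something like the sketch below (the `message` field is hypothetical, just for illustration):

    PUT /my-index/_mapping
    {
      "properties": {
        "message": {
          "type": "text",
          "fielddata": true
        }
      }
    }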

Can someone help me out and clarify how fielddata works and which factors can lead to this kind of high memory usage?

It seems this question is more appropriate for the Elasticsearch Discuss hub, so I'll forward it there.

What was the field with the high usage? Apart from text fields, a common culprit is the _id field, which builds fielddata if you accidentally sort or aggregate on it. A setting is in the works to let you reject searches that would cause this.
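For example, a terms aggregation like the one below (index name is a placeholder) forces Elasticsearch to build fielddata for _id on the heap:

    GET /my-index/_search
    {
      "size": 0,
      "aggs": {
        "docs_by_id": {
          "terms": { "field": "_id" }
        }
      }
    }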
