413 Request Entity Too Large - Kibana error

Hi,

Elastic Stack version: 7.2.0

We are using Logstash to parse data and create custom fields, and we rely on dynamic mapping to auto-detect the new fields and add them to the index mapping. More than 1,000 new fields are added through the Logstash pipeline. The data is indexed into Elasticsearch successfully; however, when we try to search the index in the Kibana Discover tab, Kibana returns a 413 error. Please see the attached screenshot.


Status Code: 413 Request Entity Too Large

We can also see the following error in the Kibana logs.

"rejected execution of processing of [57480853][indices:data/write/bulk[s][
p]]: request: BulkShardRequest [[.kibana_1][0]] containing [index {[.kibana][_doc][index-pattern:metricbeat-*], **source[n/a, actual length: [1.1mb], max length: 2kb]}]** blocking until refresh, target allocation id: 
ur3EAwZhTU6jb0ixKb7ViA, primary term: 31 on EsThreadPoolExecutor[name = utilities-b3-7/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@429ef742[Running, pool size = 6, ac
tive threads = 6, queued tasks = 242, completed tasks = 43490935]]\\\"},\\\"status\\\":429}\"}\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:315:15)\n    at checkRespForFailure
 (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:274:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n    at IncomingMessage.
wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n    at IncomingMessage.emit (events.js:194:15)\n    at endReadableNT (_stream_readable.js:1103:12)\n    at process._tic
kCallback (internal/process/next_tick.js:63:19)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/api/saved_objects/index-pattern/metricbeat-*","path":"/api/saved_objects/index-pattern/metricbeat-*","href":"/api/saved_objects/index-pattern/metricbeat-*"},"message":"rejected execution of processing of [57480853][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[.kibana_1][0]] containing [index {[.kibana][_doc][index-pattern:metricbeat-*], source[n/a, actual length: [1.1mb], max length: 2kb]}] blocking until refresh, target allocation id: ur3EAwZhTU6jb0ixKb7ViA, primary term: 31 on EsThreadPoolExecutor[name = utilities-b3-7/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@429ef742[Running, pool size =
 6, active threads = 6, queued tasks = 242, completed tasks = 43490935]]: [remote_transport_exception] [utilities-b3-7][10.2.64.7:9300][indices:data/write/update[s]]"}

We have increased server.maxPayloadBytes to "2097152" in kibana.yml, http.max_header_size to "2mb" in elasticsearch.yml, and client_max_body_size to "2M" in nginx.conf, but Kibana is still returning a "413 Request Entity Too Large" response.
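For reference, here is roughly what we changed (client_max_body_size can live in the http, server, or location block that fronts Kibana, and nginx has to be reloaded for the change to take effect):

```
# kibana.yml
server.maxPayloadBytes: 2097152

# elasticsearch.yml
http.max_header_size: 2mb

# nginx.conf -- in the http/server/location block that proxies Kibana
client_max_body_size 2M;
```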
Response in the browser:

```
<html>
<head><title>413 Request Entity Too Large</title></head>
<body bgcolor="white">
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.7.8</center>
</body>
</html>
```

Does anyone have any recommendations on how to fix this issue? Any help would be greatly appreciated.


I think there are a couple of things you could try. First, you can change how many documents Discover requests by going to Management > Advanced Settings and lowering the discover:sampleSize setting.
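If it is easier to script than to click through the UI, you should be able to push the same setting through Kibana's advanced-settings API; a rough sketch, where the host and the value 100 are just placeholders for your environment:

```
# Lower Discover's sample size from the default (500) to an example value of 100
curl -X POST "http://localhost:5601/api/kibana/settings" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"changes": {"discover:sampleSize": 100}}'
```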

That will probably solve it. Another thing you could do, if some fields in your documents are really large, is to add a source filter to your index pattern to exclude the offending fields. Discover will then refrain from loading those fields.
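Source filters live under Management > Index Patterns > (your pattern) > Source filters. If you prefer the API route, something like the sketch below may work; it assumes the saved-object id is metricbeat-* (as in your log) and uses a made-up field name, big_payload_field, that you would replace with your actual large field(s). Note that sourceFilters is stored as a JSON string inside the index-pattern saved object.

```
# Exclude a (hypothetical) large field from being loaded by Discover
curl -X PUT "http://localhost:5601/api/saved_objects/index-pattern/metricbeat-*" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"attributes": {"sourceFilters": "[{\"value\":\"big_payload_field\"}]"}}'
```

Wildcards are allowed in the filter value (e.g. some_prefix.*), so a single pattern can exclude a whole family of fields.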
