I'm using filebeat to send raw JSON documents (one per line) to a hosted (cloud.elastic.co) ES cluster.
The max request size is not configurable server-side, and despite tweaking the batch size (bulk_max_size) in Filebeat's elasticsearch output config, I can't find a good balance.
When filebeat hits a "request too large" error, it simply retries forever and never successfully sends anything at all. When I kill the process, it restarts, presumably prepares a different batch, and things work again, at least for a time.
Is there some filebeat config I can do to remedy this?
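For reference, the relevant part of my filebeat.yml looks roughly like this. The host and the numbers are illustrative, not my real values; bulk_max_size is the knob I've been tweaking:

```yaml
output.elasticsearch:
  # Hypothetical Elastic Cloud endpoint, not my real cluster URL.
  hosts: ["https://my-cluster.es.io:9243"]
  # Number of events per bulk request. Raising it improves throughput for
  # typical small events, but a batch of unusually large events can exceed
  # the server-side request size limit.
  bulk_max_size: 2048
```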
Sorry, I didn't see the reply. Yes, but if bulk_max_size is too low, throughput suffers. It's the occasional very large request that throws things off.
If there were a bulk_max_size_bytes option, that would be perfect, of course, but bulk_max_size is a number of items.
The client (filebeat) limits batches by a maximum number of items, while the server (elasticsearch) limits requests by a maximum size in bytes. I think that's the mismatch?
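To make the mismatch concrete, here's the rough sizing I'm working from. The 100 MB figure is the stock Elasticsearch http.max_content_length default (I don't know what my Cloud cluster actually enforces), and the event sizes are guesses rather than measurements:

```yaml
# Back-of-the-envelope sizing (all numbers are assumptions):
#   worst-case request bytes ≈ bulk_max_size × largest single event
# With occasional ~1 MB events and a ~100 MB request limit, bulk_max_size has
# to stay well under ~100, even though a much larger value would be fine for
# the typical small events. That's the throughput penalty I'm describing.
output.elasticsearch:
  bulk_max_size: 64      # sized for the worst-case event, hurting average throughput
  compression_level: 3   # gzip the request body; may help if the limit applies
                         # to the compressed size (an assumption on my part)
```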