Hello Experts,
I would like to know on which nodes (client/master/data) I should set http.max_content_length in the elasticsearch.yml file.
I am not sure I understand the question. Can you elaborate please?
Why not keep the default setting here? What is the intention of changing this setting that cannot be achieved otherwise?
--Alex
Hi Alex,
I have a fluentd DaemonSet on my Kubernetes cluster for collecting logs, which buffers and pushes chunks of up to 320MB every 2s to the ES cluster. I frequently get buffer overflows on the fluentd end, and the bottleneck appears to be on the ES cluster side. So I believe increasing the value from the default 100MB makes sense. For this I suppose I need to edit the elasticsearch.yml file on my ES nodes, but I am not sure whether I need to do it on all nodes (i.e. master, data and client nodes) or only on the client nodes, as they are the ones that receive the requests and then load balance.
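For reference, this is the change I am considering in elasticsearch.yml (the 350mb value is just my own guess to leave headroom above the 320MB chunks; as far as I understand this is a static setting, so it would require a node restart to take effect):

```yaml
# elasticsearch.yml
# Raise the maximum size of an HTTP request body (default is 100mb).
# 350mb is an assumed value chosen to leave headroom above our 320MB chunks.
http.max_content_length: 350mb
```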
I would recommend sending smaller chunks more frequently instead.
What Christian (hi!) said.
The main question here is: why do you send 320 MB chunks in the first place? Where does that buffer overflow stem from? Why can't you send smaller chunks? Solving the problem at its root cause, rather than dragging the issue across your logging stack, will address it at its core and also prevent network traffic peaks that may affect your main traffic as well.
Thanks, I will reduce the chunk size in the fluentd config and try.
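Here is a sketch of the direction I plan to take in the fluentd output config (the match pattern, host and values are placeholders I still intend to tune, not tested recommendations):

```
<match kubernetes.**>
  @type elasticsearch
  host es-client.example.svc
  port 9200
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.buffer
    # Smaller chunks keep each bulk request well under the
    # default 100MB http.max_content_length on the ES side.
    chunk_limit_size 16MB
    # Flush more often and in parallel instead of accumulating
    # huge chunks, to avoid buffer overflows on the fluentd end.
    flush_interval 5s
    flush_thread_count 4
    # Apply backpressure instead of raising BufferOverflowError.
    overflow_action block
  </buffer>
</match>
```

With chunks that size, each bulk request stays far below the default limit, so the elasticsearch.yml change should not be needed at all.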