[ERROR][logstash.outputs.elasticsearch][main][4662344eb1eeab4baf336e2996a14ddadf8c61b8943c6e31c68cb582d77f72de] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://elasticsearch-server:9200/_bulk", :content_length=>121168835}
How can I change the value "http.max_content_length: 200mb" on the whole cluster?
Thanks.
This can only be configured statically, in the configuration file or via properties at startup. Is there any chance you could send smaller bulks instead? Out of curiosity: why did you pick this value? Was it based on testing?
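A sketch of both options mentioned above. The setting names are the documented ones; the values are illustrative, and whether lowering the Logstash batch size is appropriate depends on your event sizes and throughput:

```yaml
# elasticsearch.yml — static setting, so it must be set in the config
# file on each node and requires a node restart to take effect:
http.max_content_length: 200mb

# logstash.yml — alternatively, keep the Elasticsearch default and shrink
# the bulk requests Logstash sends by lowering the per-worker batch size
# (125 events per worker is the default; smaller values produce smaller bulks):
pipeline.batch.size: 125
```

Note that because the limit is static, changing it cluster-wide means editing the file and doing a rolling restart; the Logstash-side change only needs a Logstash restart.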
So you suggest leaving this value at its default and instead reducing the amount of data sent per bulk from Logstash. Is that because raising it would affect the performance of the cluster in the future?
I set this value because the data was not reaching Elasticsearch. After restarting Logstash, the data was uploaded correctly.
Also, should the setting "http.max_content_length: 200mb" be specified on all nodes of the cluster (master and data), or only on the nodes that the Logstash output points to?