Is there a way to configure the batch size in the Elasticsearch output plugin?
I have an Elasticsearch cluster on AWS whose nodes have a maximum request size of 10MB. If the output plugin sends a request larger than 10MB, I would lose data. According to the documentation, bulk requests are capped at 20MB, which would cause problems for me.
According to this thread on the AWS forums, there is currently no way to raise this limit on the AWS side.
So the Elasticsearch output plugin will not combine several processing batches when sending bulk inserts to Elasticsearch? In other words, lowering pipeline.batch.size would save the day. Correct?
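If that holds, a smaller batch size can be set in logstash.yml (or with the -b command-line flag). A minimal sketch; the value of 50 is only an illustrative guess and would need tuning against your actual average event size:

```yaml
# logstash.yml
# pipeline.batch.size is the number of events a single pipeline worker
# collects before running filters and outputs; smaller batches mean
# smaller bulk requests, at the cost of some throughput.
pipeline.batch.size: 50   # default is 125; 50 is illustrative only
pipeline.workers: 2       # each worker sends its own bulk requests
```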
I am referring to an event in Logstash, which gets turned into a JSON document. One thing that I guess could increase the size of the bulk request is creating new events as part of the batch, e.g. with the ruby or clone filters.
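For example, a filter like the following sketch (the "archive" clone type is a hypothetical name for illustration) doubles the number of events in each batch, and therefore roughly doubles the size of the resulting bulk request:

```ruby
filter {
  # clone emits one extra copy of each event per entry in `clones`,
  # so every incoming event becomes two events in the batch
  clone {
    clones => ["archive"]   # hypothetical clone type
  }
}
```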