There is some confusion regarding the usage of the pipeline.batch.size parameter in Logstash.
Can anyone please clarify?
From the documentation, I understand that pipeline.batch.size controls the number of events an individual worker thread collects from the inputs before running the filter and output sections of the pipeline.
If the Logstash output is sent to AWS OpenSearch, will the pipeline.batch.size setting have an impact on the _bulk request, or will the default bulk payload limit of the AWS instance type take precedence?
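For context, the batch settings live in logstash.yml, and the output plugin sends each pipeline batch as a single _bulk request, so the two are directly linked. A minimal sketch (the values here are illustrative defaults, not a recommendation):

```yaml
# logstash.yml -- pipeline batch tuning (illustrative values)
pipeline.batch.size: 125   # events per worker per batch; each batch becomes one _bulk request
pipeline.batch.delay: 50   # ms to wait for a full batch before flushing a partial one
pipeline.workers: 2        # each worker sends its own batches concurrently
```

So the effective _bulk payload is roughly batch size times average serialized event size, and the AWS-side limit acts as a hard cap on top of that.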
I am facing two issues now.
- Logstash reports `Encountered a retryable error. Will Retry with exponential backoff code=>413` (HTTP 413, request payload too large), and the suggested fix is to reduce the batch size.
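The 413 happens when a _bulk request exceeds the per-request payload limit of the OpenSearch domain, which on AWS depends on the instance type (commonly around 10 MB on smaller instances). A back-of-envelope way to pick a batch size that stays under that limit (the 10 MB limit, 2 KiB average event size, and the safety margin below are all assumptions to adapt to your data):

```python
# Estimate a safe pipeline.batch.size from the payload limit and the
# average serialized event size. All numbers here are illustrative.

def max_batch_size(payload_limit_bytes: int, avg_event_bytes: int, safety: float = 0.8) -> int:
    """Largest batch whose serialized _bulk body stays under the limit,
    keeping a safety margin for bulk-action metadata and size variance."""
    return int(payload_limit_bytes * safety // avg_event_bytes)

# Assuming a 10 MB request limit and ~2 KiB events:
print(max_batch_size(10 * 1024 * 1024, 2048))  # -> 4096
```

Measuring the real average event size (for example from an index's store size divided by its document count) makes the estimate much more useful than guessing.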
- Because of the small queue size in Filebeat and the small batch size in Logstash, some log lines are not processed before the source file gets deleted (see: FileBeat slow - Improve performance). Based on that, I increased the queue size and batch size to handle the missing logs.
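On the Filebeat side, the knobs involved are the internal memory queue and its flush behavior. A sketch of what I mean by increasing the queue size (values are illustrative, not tuned recommendations):

```yaml
# filebeat.yml -- internal queue tuning (illustrative values)
queue.mem:
  events: 8192             # total events the in-memory queue can buffer
  flush.min_events: 2048   # batch size handed to the Logstash output
  flush.timeout: 1s        # flush a partial batch after this long
```

A larger queue lets Filebeat absorb bursts while Logstash catches up, which reduces the chance of a file being deleted before its lines are shipped.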
I would like a common solution that handles both issues, since they pull in opposite directions: reducing the batch size avoids the 413 error but makes the missing-logs problem worse, while increasing it does the reverse.