I have Logstash running on AWS EC2 instances, which I am using to push data
to some Elasticsearch instances. Occasionally, I notice a few messages
getting lost and never being published to Elasticsearch. I am using a very
basic configuration, with one input thread that reads from a local log file
and one Elasticsearch output worker that publishes the data.
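For reference, the pipeline looks roughly like this (the file path and the Elasticsearch host below are placeholders, not my actual values):

```conf
input {
  # single file input reading a local log file
  file {
    path => "/var/log/myapp/app.log"   # placeholder path
    start_position => "beginning"
  }
}

output {
  # single Elasticsearch output worker
  elasticsearch {
    hosts => ["http://localhost:9200"]  # placeholder host
    workers => 1
  }
}
```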
Reading through the Logstash documentation tells me that failed messages
are retried (a configurable number of times) if they fall under the
"retryable errors" bracket. Once all retries are exhausted, the failed
messages are logged to stderr.
Is there any way I could configure the error output file location?
Also, does the elasticsearch output plugin buffer messages and send them
in batches? If so, is it possible to configure the batch size?
(FYI, I am starting Logstash as a service using Chef.)
Thanks in advance.