Logstash config reload takes too long and logs are lost during the reload

Hello,

We are using logspout to ship container logs to Logstash, where Logstash filters them and sends them to an Elasticsearch cluster hosted on Elastic Cloud. We have a single input to which all container logs are pushed by logspout. Based on the container stack, we filter the data and send it to at least 30 different indexes.

We manage the Logstash machine with an Ansible playbook. When Logstash auto-reloads the pipeline config, it rediscovers every output (all of them are identical except for the index name) and takes at least 30 minutes to reload. For those 30 minutes, logspout is unable to connect to the input, so it cannot send its logs and we lose them.
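The output section looks roughly like this (a hypothetical sketch; the actual conditionals, endpoint, and index names are placeholders, not our real config):

```
output {
  if [docker][name] =~ /stack-a/ {
    elasticsearch {
      hosts => ["https://our-cluster.es.io:9243"]  # placeholder Elastic Cloud endpoint
      index => "stack-a-%{+YYYY.MM.dd}"
    }
  }
  if [docker][name] =~ /stack-b/ {
    elasticsearch {
      hosts => ["https://our-cluster.es.io:9243"]
      index => "stack-b-%{+YYYY.MM.dd}"
    }
  }
  # ... repeated for ~30 stacks, identical except for the index name
}
```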

So is it possible to keep the input working during a config reload?

Elasticsearch version: 6.5.2 (Elastic Cloud)
Logstash version: 6.5.2

As far as I know, that's not really possible, since inputs are part of the configuration and everything is initialized at the same time.

What you can do is put a message queue (like Kafka) between logspout and Logstash to decouple the two processes.
That way, even if Logstash is down or restarting, Kafka will continue to accept messages and hold them for Logstash to consume as soon as it's back up.
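On the Logstash side, you'd replace the current input with a Kafka consumer. A minimal sketch, assuming a broker reachable at kafka:9092 and a topic named container-logs that logspout (or an adapter in front of it) publishes to; both names are assumptions:

```
input {
  kafka {
    bootstrap_servers => "kafka:9092"     # assumed broker address
    topics => ["container-logs"]          # assumed topic logspout writes to
    group_id => "logstash"
    codec => "json"
  }
}
```

During a pipeline reload the messages simply accumulate in the topic, and Logstash picks up where it left off once the new config is loaded, so nothing is lost.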
