I'm seeing that if one of the outputs gets into an error state, the other outputs are affected as well.
For example, I have one elasticsearch output and one syslog output. If the syslog output runs into an error, such as the syslog server not being reachable, elasticsearch also stops receiving events.
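For illustration, my pipeline looks roughly like this (input and hostnames are placeholders):

```
input {
  beats { port => 5044 }
}

output {
  # both outputs live in the same pipeline and share its worker threads
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
  syslog {
    host => "syslog.example.com"   # placeholder
    port => 514
  }
}
```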
If one output is not working, the Logstash workers will hold on to their in-flight events while waiting for that output to respond. Since you have a limited number of workers and each worker carries only a batch of events at a time, after a while Logstash has no free worker left to deliver the elasticsearch events: all of them are blocked, waiting for syslog to come back so they can finally flush their batches.
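For context, the settings that bound how much a pipeline can have in flight live in logstash.yml; the values shown here are the defaults in recent Logstash versions:

```
# logstash.yml -- limits on in-flight events per pipeline
pipeline.workers: 4        # defaults to the number of CPU cores
pipeline.batch.size: 125   # events each worker collects before flushing to the outputs
pipeline.batch.delay: 50   # ms to wait before flushing an under-filled batch
```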
You can try to increase the number of workers, but that will only postpone the problem. Either fix the syslog server, or run two Logstash instances, one for elasticsearch and another for syslog... or replace the syslog output with a message queue, like redis or kafka, and then have a separate consumer that picks up the queued messages and sends them to syslog. This way, if syslog dies, Logstash can keep working, since the message queue stays up and stores the messages for later delivery. Sketches of both approaches are below.
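Instead of a second Logstash process, you can also get the same isolation with the multiple-pipelines feature (Logstash 6.x and later): each pipeline gets its own workers and queue, so a stuck syslog output can no longer starve the elasticsearch one. A sketch, with config paths as placeholders:

```
# pipelines.yml -- one independent pipeline per output
- pipeline.id: es
  path.config: "/etc/logstash/conf.d/es.conf"
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog.conf"
```

For the message-queue variant, the shipping side would just swap the syslog output for a kafka one (broker address and topic are placeholders); a separate consumer then reads the topic and forwards to syslog at its own pace:

```
output {
  kafka {
    bootstrap_servers => "kafka.example.com:9092"   # placeholder broker
    topic_id => "syslog-events"                     # placeholder topic
  }
}
```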