Is Logstash executing config sequentially

Hi there.
I have two Elasticsearch clusters, so Logstash has two output definitions for every index, like this:

output {
  if [type] == "iis" {
    elasticsearch {
      hosts => ["NODE1:9200", "NODE2:9200"]
      index => "logstash-%{[type]}-%{+YYYY.MM}"
      template_overwrite => true
    }
    elasticsearch {
      hosts => ["A-NODE1:9200", "A-NODE2:9200"]
      index => "logstash-%{[type]}-%{+YYYY.MM}"
      template_overwrite => true
    }
  }
}

One node in the first cluster ran out of space, so all indices on that cluster became read-only.
I thought the second cluster would still receive logs, but it doesn't.

Is Logstash executing the config sequentially, like source code? Is it correct that if any exception occurs, all subsequent steps won't be executed?

Logstash pushes data to outputs synchronously, so a problem with one output will affect all outputs.

Maybe there is a workaround other than deploying a second Logstash?

I don't think there is.

Another question related to this problem:
Where do logs go when Beats or NXLog send them to Logstash but Logstash can't relay them to ES?
Does Logstash have some kind of buffer for this case?

Yes, if you enable the persistent queue.
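For reference, enabling it is a logstash.yml change. A minimal sketch (the path and size below are example values, not defaults):

queue.type: persisted
# Optional: where the queue files live; defaults to path.data/queue
path.queue: /var/lib/logstash/queue
# Once this cap is reached, Logstash applies backpressure to its inputs
queue.max_bytes: 4gb

With this in place, events that have been read from an input survive a Logstash restart, because they sit on disk until the outputs acknowledge them.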

But what happens if the persistent queue is not enabled?
Events are stored in memory until what?
If the Logstash service restarts, will all queued events be lost?

But what happens if the persistent queue is not enabled?
Events are stored in memory until what?

Don't count on the in-memory store. It's very small. If it gets full, Logstash will stop accepting new events, which hopefully makes the sending party stop pushing more data. I don't know how NXLog behaves in this case, but Filebeat will simply let the events rest in the log files, i.e. the original log files become the buffer.
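For context, the memory queue has no size setting of its own; how much can sit in memory follows from the pipeline settings in logstash.yml. A sketch with the stock defaults:

# The default queue type; nothing is written to disk
queue.type: memory
# pipeline.workers defaults to the number of CPU cores on the host
pipeline.workers: 2
pipeline.batch.size: 125

Roughly pipeline.workers * pipeline.batch.size events are in flight at any time, which is why the in-memory buffer fills up quickly once the outputs stall.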

If the Logstash service restarts, will all queued events be lost?

Yes.

Even if the shutdown was graceful, as described in Shutting Down Logstash | Logstash Reference [8.11] | Elastic?

You're right: if the shutdown is graceful and Logstash is able to wait for the internal queue to drain, nothing will be lost. But if the queue is full because the outputs are clogged, Logstash can't do anything unless you've enabled the persistent queue.
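For completeness, that behaviour is controlled by a logstash.yml setting (also available as the --pipeline.unsafe_shutdown command-line flag):

# Default: on shutdown, wait for in-flight events to be flushed to the outputs
pipeline.unsafe_shutdown: false

Setting it to true makes Logstash exit even while events are still in flight, so those events are lost; with the default, a shutdown can stall indefinitely if an output is clogged.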

Thank you for your answers

I have another scenario to check that is related to this thread.

Let's assume that Logstash has two ES outputs and the connection to one of them is unstable.
Do I understand correctly that Logstash will push events to BOTH ES clusters synchronously? So if one output has an unstable network connection, the second output with a good connection will not receive events from Logstash either. Is this correct?

Yes.

Is there any way to debug this behaviour when events are stuck in the Logstash queue? Which logger is responsible for that?
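One place to start, assuming a stock config/log4j2.properties: the Elasticsearch output plugin logs under the logstash.outputs.elasticsearch logger, and raising it to debug shows retry and connection problems in detail. For example:

# "elasticsearchoutput" is just a local label; the .name value is what matters
logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
logger.elasticsearchoutput.level = debug

The same logger level can also be changed at runtime through the Logstash node API instead of editing the file.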
