Logstash Output to Elasticsearch when one node is down

Logstash is sending output to three Elasticsearch nodes. When one ES node is down, Logstash stops forwarding certain outputs to the remaining nodes, presumably because the dead node was the one receiving them. For example, with the Logstash config below, when elasticsearchhost1 is down I see no traffic going to the dev-internal or test-external index, but the other indices still receive traffic. When elasticsearchhost2 is down, I see no traffic going to, say, the test-internal index. When all nodes are up, I see all output in Kibana.
Is there a way to prevent this behavior?

input {
  beats {
    port => "5044"
    host => "0.0.0.0"
  }
}


filter {
  if "beats_input_codec_plain_applied" in [tags] {
    mutate {
      remove_tag => ["beats_input_codec_plain_applied"]
    }
  }
}
output {
  if "test-int" in [fields][environment] {
    elasticsearch {
      hosts    => ["elasticsearchhost1:9200", "elasticsearchhost2:9200", "elasticsearchhost3:9200"]
      cacert   => "path/to/cert"
      index    => "test-internal-%{+yyyy.MM.dd}"
      user     => "elastic"
      password => "secret"
    }
  }
  else if "test-ext" in [fields][environment] {
    elasticsearch {
      hosts    => ["elasticsearchhost1:9200", "elasticsearchhost2:9200", "elasticsearchhost3:9200"]
      cacert   => "path/to/cert"
      index    => "test-external-%{+yyyy.MM.dd}"
      user     => "elastic"
      password => "secret"
    }
  }
  else if "dev-int" in [fields][environment] {
    elasticsearch {
      hosts    => ["elasticsearchhost1:9200", "elasticsearchhost2:9200", "elasticsearchhost3:9200"]
      cacert   => "path/to/cert"
      index    => "dev-internal-%{+yyyy.MM.dd}"
      user     => "elastic"
      password => "secret"
    }
  }
  else if "dev-ext" in [fields][environment] {
    elasticsearch {
      hosts    => ["elasticsearchhost1:9200", "elasticsearchhost2:9200", "elasticsearchhost3:9200"]
      cacert   => "path/to/cert"
      index    => "dev-external-%{+yyyy.MM.dd}"
      user     => "elastic"
      password => "secret"
    }
  }
  else {
    elasticsearch {
      hosts    => ["elasticsearchhost1:9200", "elasticsearchhost2:9200", "elasticsearchhost3:9200"]
      cacert   => "path/to/cert"
      index    => "iis-%{+yyyy.MM.dd}"
      user     => "elastic"
      password => "secret"
    }
  }
  stdout { codec => rubydebug }
}
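For context, the elasticsearch output is supposed to load-balance across the `hosts` list and periodically retry nodes it has marked dead (the `resurrect_delay` option, 5 seconds by default, controls how often). Two things worth checking, as a sketch rather than a verified fix: that nothing is pinning each output to a single host, and that the persistent queue is enabled so events are buffered on disk while the output is retrying instead of back-pressuring the beats input. The values below are illustrative, not recommendations:

```yaml
# logstash.yml — hypothetical values; enables the on-disk queue so events
# are buffered while an ES node is unreachable rather than stalling inputs
queue.type: persisted
queue.max_bytes: 1gb
```

With the in-memory queue (the default), a stalled output can block the whole pipeline, which could look like "traffic stops for some indices" from Kibana's point of view.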


What version are you on?
It should realise that the node is unreachable and then try another one.

Elasticsearch is running 7.10.2 on all nodes.

For Logstash, two nodes are running 7.10.2 and one node is running 7.6.2 (version upgrade in progress).

Kibana is on 7.10.2 on all three nodes.

Could it be the version mismatch?

Not on the Elasticsearch side, more on the Logstash side as there has been work done on that output plugin.

Could you please explain this comment? Did you mean there have been changes to Logstash output plugins?

It only applies if you were running an older version of Logstash, which you are not.

Yeah, that was what I thought. I am also seeing another behavior with Logstash. In the Logstash logs, I am getting the error below.

Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down

However, I can confirm Elasticsearch is up the whole time, because beats from other environments ship to Elasticsearch directly without going through Logstash. Those logs look good and there are no delays.

Any thoughts?
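One way to narrow it down (a diagnostic sketch; the hostnames, credentials, and cert path are the placeholders from the config above, not real values): check that each ES node is reachable from the Logstash host itself, since beats shipping directly to ES take a different network path and don't prove connectivity from Logstash:

```shell
# Run from the Logstash host. A non-200 response or TLS error for any one
# node would explain the "unreachable or down" messages for that node.
for host in elasticsearchhost1 elasticsearchhost2 elasticsearchhost3; do
  echo "== $host =="
  curl -s --cacert path/to/cert -u elastic:secret \
       "https://$host:9200/_cluster/health?pretty"
done
```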

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.