Logstash thread model - workaround for multiple destinations

A while ago, I created a thread about cloned events and their delivery to two distinct Elasticsearch clusters.

As it turned out, Logstash's threading model does not guarantee that an event is delivered to its proper output destination when it has been cloned as part of the filtering process.
The solution suggested by @Christian_Dahlqvist was using Logstash pipelines.
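For context, here is a minimal sketch of what that might look like in pipelines.yml, assuming a Logstash version that supports pipeline-to-pipeline communication (the pipeline IDs are made up for illustration):

- pipeline.id: upstream
  config.string: |
    input { beats { port => 5044 } }
    output { pipeline { send_to => ["normal", "disaster"] } }
- pipeline.id: normal
  config.string: |
    input { pipeline { address => "normal" } }
    output { elasticsearch { hosts => ["elastic-01.normal.site:9200"] } }
- pipeline.id: disaster
  config.string: |
    input { pipeline { address => "disaster" } }
    output { elasticsearch { hosts => ["elastic-01.disaster.site:9200"] } }

With that setup each event is copied to both downstream pipelines, so no clone filter is needed.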

How about the following workaround?

Config file: 1.beats_to_stdout

input {
  beats {
    port => 5044
    tags => ["beats"]         # mark every event that arrived via Beats
  }
}
filter {
  clone {
    clones => ["cloned"]      # duplicate each event; the copy gets type => "cloned"
  }
}
output {
  # json_lines writes one JSON document per line, so the second
  # instance can reliably parse events off its stdin
  stdout { codec => json_lines }
}
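
For illustration, the cloned copy of an event would then land on stdout as a single JSON line, roughly like this (fields abbreviated, timestamp made up):

{"@timestamp":"2019-01-01T00:00:00.000Z","message":"...","tags":["beats"],"type":"cloned"}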

Config file: 2.stdin_to_elasticsearch

input {
  # parse each incoming line back into a structured event,
  # restoring the tags and type fields set upstream
  stdin { codec => json_lines }
}
filter {
  # keep only events that came in through the Beats input
  if "beats" not in [tags] { drop { } }
}
output {
  # route on the type field set by the clone filter
  if [type] == "cloned" { elasticsearch { hosts => ["elastic-01.normal.site:9200"] } }
  else { elasticsearch { hosts => ["elastic-01.disaster.site:9200"] } }
}

In essence, we tag events coming from Beats, clone them, and push both copies to stdout.
Then we read the events back in from stdin, drop any that are not tagged, and route each one to the proper Elasticsearch cluster based on the type field that the clone filter set.
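
To be explicit, the two configs are meant to run as two separate Logstash instances chained with a shell pipe, along these lines (the --path.data values are arbitrary, but must differ so the instances do not collide):

bin/logstash -f 1.beats_to_stdout --path.data /tmp/ls-upstream \
  | bin/logstash -f 2.stdin_to_elasticsearch --path.data /tmp/ls-downstream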

Any thoughts?
