Multiple Elasticsearch Output Pipeline

Hi folks,
we have a pipeline with a beats input and multiple ES host outputs.
But if one of the ES hosts is unreachable or an index is full, all the other outputs also stop receiving data.
Is that normal, or a misconfiguration on our side?

In the log we only get permanent warnings that one of the ES hosts is not reachable.

Our Setup:
Kubernetes Cluster
Elastic Helm Chart
Beats/Logstash/ES Version 7.6.1

Config:

     pipelines.yml: |
        - pipeline.id: beats-server
          config.string: |
            input { beats { port => 5044 } }
            output {
              if [kubernetes][namespace] == "bvsecure-trusted" {
                 pipeline { send_to => bvsecure_trusted }
              } else if [kubernetes][namespace] =~ "payments" {
                 pipeline { send_to => payments }
              } else {
                 pipeline { send_to => fallback }
              }
            }
        - pipeline.id: bvsecure-trusted-processing
          config.string: |
            input { pipeline { address => bvsecure_trusted } }
            output {
              elasticsearch {
                  hosts => ["estrusted:9200"]
                  password => "XXX"
                  user => "XXX"
                  manage_template => false
                  index => "%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
                  ssl => true
                  cacert => "/etc/logstash/config/certs/elastic-ca.pem"
              }
            }
        - pipeline.id: payments-processing
          config.string: |
            input { pipeline { address => payments } }
            output {
              elasticsearch {
                  hosts => ["espayments:9200"]
                  password => "XXX"
                  user => "XXX"
                  manage_template => false
                  index => "%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
                  ssl => true
                  cacert => "/etc/logstash/config/certs/elastic-ca.pem"
              }
            }
        - pipeline.id: fallback-processing
          config.string: |
            input { pipeline { address => fallback } }
            output {
              elasticsearch {
                  hosts => ["esdelivery:9200"]
                  password => "XXX"
                  user => "XXX"
                  manage_template => false
                  index => "fallback-%{+YYYY.MM.dd}"
                  ssl => true
                  cacert => "/etc/logstash/config/certs/elastic-ca.pem"
              }
            }

thanks
marc

Yes, that is expected. You could use pipeline-to-pipeline communication with an output isolator pattern to resolve it.
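The config above already splits the outputs into separate pipelines, but the downstream pipelines use in-memory queues by default, so backpressure from one blocked elasticsearch output still propagates to the shared beats-server pipeline. A minimal sketch of the isolator part, assuming your existing pipeline IDs and that disk space is available for the queues (the queue sizes here are illustrative, not recommendations):

```yaml
     pipelines.yml: |
        # ... beats-server distributor pipeline unchanged ...
        - pipeline.id: bvsecure-trusted-processing
          # Persistent queue: if estrusted is unreachable, events buffer to
          # disk here instead of blocking the upstream beats-server pipeline.
          queue.type: persisted
          queue.max_bytes: 2gb
          config.string: |
            input { pipeline { address => bvsecure_trusted } }
            output { elasticsearch { hosts => ["estrusted:9200"] } }
        - pipeline.id: payments-processing
          queue.type: persisted
          queue.max_bytes: 2gb
          config.string: |
            input { pipeline { address => payments } }
            output { elasticsearch { hosts => ["espayments:9200"] } }
```

Note that the isolation only holds while the persistent queue has room; once `queue.max_bytes` is reached, that pipeline exerts backpressure upstream again.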

Hi Badger,
thank you very much
Marc

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.