Move to pipeline-to-pipeline communication

Hi,

Today we have a fairly complicated single pipeline, consisting mostly of conditionals that route everything to the right place. One issue we've noticed with this is that if an output goes down, it affects the whole pipeline.

What we want to do is the following:

Send the same documents from Logstash to two different Elasticsearch endpoints.

Today an example output looks like this:

    else if "syslog" in [tags] and "sql_successful" in [tags] and [cendotServiceName] == "service-audit" and [vd] == "vd01" {
        elasticsearch {
            ilm_enabled => true
            ilm_rollover_alias => "daily"
            ilm_pattern => "000001"
            ilm_policy => "SIZE-30_AGE-30_DEL-365"
            hosts => ["10.229.1.12:9200", "10.229.1.13:9200"]
            user => "logstash_internal"
            password => "password"
        }
        elasticsearch {
            #manage_template => false
            hosts => ["192.168.1.10:9200"]
            index => "logs_write"
            user => "admin"
            ilm_enabled => "false"
            password => "password"
            ssl => true
            ssl_certificate_verification => false
        }
    }

How would we go about changing this configuration to support a second pipeline? We basically want to move the second elasticsearch output into a new pipeline. I imagine something like this:

    else if "syslog" in [tags] and "sql_successful" in [tags] and [cendotServiceName] == "service-audit" and [vd] == "vd01" {
        elasticsearch {
            ilm_enabled => true
            ilm_rollover_alias => "daily"
            ilm_pattern => "000001"
            ilm_policy => "SIZE-30_AGE-30_DEL-365"
            hosts => ["10.229.1.12:9200", "10.229.1.13:9200"]
            user => "logstash_internal"
            password => "password"
        }
        pipeline { send_to => "second_pipeline" }
    }

Where "second_pipeline" just contains the following?

    output {
        elasticsearch {
            #manage_template => false
            hosts => ["192.168.1.10:9200"]
            index => "logs_write"
            user => "admin"
            ilm_enabled => "false"
            password => "password"
            ssl => true
            ssl_certificate_verification => false
        }
    }
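
I assume the second pipeline would also need a pipeline input whose address matches the name used in send_to, something like:

    input {
        pipeline { address => "second_pipeline" }
    }

plus a matching entry in pipelines.yml for the new pipeline. Is that right?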

Anyone?

Your suggestion to use a second pipeline looks reasonable. It is basically the output isolator pattern. Did you have a problem when you tested it?
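
For reference, a minimal sketch of the output isolator pattern from the pipeline-to-pipeline docs, trimmed down to your case (pipeline IDs and paths here are hypothetical, and config.string is used only to keep the example compact — you can point path.config at your existing files instead):

    # pipelines.yml
    - pipeline.id: intake
      # your current pipeline, with the second elasticsearch output
      # replaced by: pipeline { send_to => "second_pipeline" }
      path.config: "/etc/logstash/conf.d/intake.conf"
    - pipeline.id: second_pipeline
      # persistent queue so a downed output buffers to disk
      # instead of back-pressuring the intake pipeline
      queue.type: persisted
      config.string: |
        input { pipeline { address => "second_pipeline" } }
        output {
          elasticsearch {
            hosts => ["192.168.1.10:9200"]
            index => "logs_write"
            user => "admin"
            password => "password"
            ilm_enabled => "false"
            ssl => true
            ssl_certificate_verification => false
          }
        }

Note that the isolation only helps while the persistent queue has free space; once it fills up, back-pressure reaches the upstream pipeline again.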

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.