Multiple output sections to the same Elasticsearch cluster

Hello Community,

We are currently using Logstash with a fairly large pipeline for all data sent from Filebeat. All data goes through the pipeline, and then there are multiple output sections that send the data to different indices within the same Elasticsearch cluster.

Every output section looks like:

output {
  if [type] == "logstash" {
    elasticsearch {
      hosts                        => [XXXX]
      index                        => "logstash-%{+xxxx.ww}"
      ...
    }
  }
}
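
To make it concrete, the full output configuration repeats this pattern once per index, with a different condition and index name each time, roughly like this (the second condition is just an illustrative placeholder, and the hosts are taken from the log lines below):

output {
  if [type] == "logstash" {
    elasticsearch {
      hosts => ["https://log01.XXX:9200", "https://log02.XXX:9200"]
      index => "logstash-%{+xxxx.ww}"
    }
  }
  if [type] == "someotherapp" {
    elasticsearch {
      hosts => ["https://log01.XXX:9200", "https://log02.XXX:9200"]
      index => "someotherapp-%{+xxxx.ww}"
    }
  }
}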

I wonder if this is the right way to do it.

Also, when starting Logstash, it produces many log lines like these:

[2020-03-03T14:50:31,132][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//log01.XXX:9200", "//log02.XXX:9200"]}
[2020-03-03T14:50:31,134][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash:xxxxxx@log01.XXX:9200/, https://logstash:xxxxxx@log02.XXX:9200/]}}
[2020-03-03T14:50:31,134][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://logstash:xxxxxx@log01.XXX:9200/, :path=>"/"}
[2020-03-03T14:50:31,142][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://logstash:xxxxxx@log01.XXX:9200/"}
[2020-03-03T14:50:31,143][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://logstash:xxxxxx@log02.XXX:9200/, :path=>"/"}
[2020-03-03T14:50:31,189][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://logstash:xxxxxx@log02.XXX:9200/"}
[2020-03-03T14:50:31,191][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2020-03-03T14:50:31,192][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2020-03-03T14:50:31,193][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//log01.XXX:9200", "//log02.XXX:9200"]}

Is this normal?

Hi Daniel,

The logs look okay to me - the warning just tells you that the connection has been restored. Since every elasticsearch output block creates its own client and connection pool, you will see this set of messages once per output section, which is why there are so many of them.

If you are always writing to Elasticsearch, and even to the same cluster, you might be better off using a single output block and a field reference in the index name instead:
index => "%{application}-%{+xxxx.ww}"
where application is the name of a field in the event whose value is used to build the index name. This way, the index name is determined dynamically per event.
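
A minimal sketch of what that could look like, assuming your events carry a type field you can copy into an application field (the field names and the fallback value here are just illustrative, and the hosts are placeholders from your logs):

filter {
  # Derive the index prefix from an existing field; fall back to a default
  # so events without [type] still end up in a predictable index.
  if [type] {
    mutate { copy => { "type" => "application" } }
  } else {
    mutate { add_field => { "application" => "unknown" } }
  }
}

output {
  elasticsearch {
    hosts => ["https://log01.XXX:9200", "https://log02.XXX:9200"]
    # One output for all events; the index is chosen per event.
    index => "%{application}-%{+xxxx.ww}"
  }
}

With a single output you also only get one connection pool and one set of those startup log messages, instead of one per output section.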

Best regards
Wolfram
