Multiple Logstash pipelines outputting into the same index

I have two Filebeat instances shipping into Logstash. One pipeline parses out the log errors I actually care about from one service; the other ingests every log line from a second service so I can track its health, since that service often crashes by hanging indefinitely. Originally I was running only the exceptions pipeline, and it was filtering correctly.

At the end of last week, I added the second pipeline using the pipelines.yml file. The Logstash log shows that both pipelines initialize correctly at startup and that two pipelines are running. They listen on separate ports, as required, and the Filebeat side is configured to ship to the matching ports.
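For completeness, the Filebeat side looks roughly like this on each host (a sketch; the Logstash host is masked, and everything in filebeat.yml other than the output section is omitted):

```yaml
# filebeat.yml on the log-tailing service's host (sketch)
output.logstash:
  hosts: ["****:5044"]
```

```yaml
# filebeat.yml on the exceptions service's host (sketch)
output.logstash:
  hosts: ["****:5043"]
```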

The problem is that both pipelines are outputting to the same index, and the filtering for the exception logs is now being ignored. It's as if the exception pipeline's config file were completely disregarded, apart from the port it receives Filebeat traffic on.

Is this a bug, or do I have something set up wrong? My configuration is extremely simple, which leaves me doubly perplexed. I have included the configs below.

Config1:

 input {
     beats {
         port => "5044"
     }
 }
 ...
 output {
     elasticsearch {
         hosts => ["http://****:9200"]
         index => "logtailing-%{[@metadata][version]}-%{+YYYY.MM.dd}"
     }
 }

Config2:

 input {
     beats {
         port => "5043"
     }
 }
 ...
 output {
     elasticsearch {
         hosts => ["http://****:9200"]
         index => "releaserexceptions-%{[@metadata][version]}-%{+YYYY.MM.dd}"
     }
 }

pipelines.yml:

  - pipeline.id: logtailing
    path.config: "../config/logtailing.conf"
  - pipeline.id: releaserexceptions
    path.config: "../config/releaserexceptions.conf"
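In case it matters, I start Logstash without any `-f`, `-e`, or `--path.config` argument; as far as I understand, passing any of those would make Logstash ignore pipelines.yml and run a single `main` pipeline instead. My logstash.yml likewise leaves the pipeline settings untouched (sketch; only the relevant part shown):

```yaml
# logstash.yml (sketch; only the relevant part) — path.config and
# config.string are left unset, since as I understand it setting either
# here (or via -f/-e on the command line) would override pipelines.yml
# path.config:
# config.string:
```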

Here is the Logstash log, so you can see that it initializes the two pipelines correctly:

 [2019-01-04T12:46:26,256][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
 [2019-01-04T12:46:31,249][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"logtailing", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
 [2019-01-04T12:46:32,138][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://****:9200/]}}
 [2019-01-04T12:46:32,151][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://****:9200/, :path=>"/"}
 [2019-01-04T12:46:32,644][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"releaserexceptions", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
 [2019-01-04T12:46:32,670][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://****:9200/]}}
 [2019-01-04T12:46:32,672][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://****:9200/, :path=>"/"}
 [2019-01-04T12:46:32,741][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://****:9200/"}
 [2019-01-04T12:46:32,806][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
 [2019-01-04T12:46:32,811][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
 [2019-01-04T12:46:32,842][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://****:9200"]}
 [2019-01-04T12:46:32,869][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
 [2019-01-04T12:46:32,900][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
 [2019-01-04T12:46:33,461][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://****:9200/"}
 [2019-01-04T12:46:33,466][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
 [2019-01-04T12:46:33,466][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
 [2019-01-04T12:46:33,468][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://****:9200"]}
 [2019-01-04T12:46:33,474][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
 [2019-01-04T12:46:33,499][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
 [2019-01-04T12:46:33,813][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5043"}
 [2019-01-04T12:46:33,831][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
 [2019-01-04T12:46:33,903][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"logtailing", :thread=>"#<Thread:0x132b2cfb run>"}
 [2019-01-04T12:46:33,903][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"releaserexceptions", :thread=>"#<Thread:0xaf8b522 run>"}
 [2019-01-04T12:46:33,998][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:logtailing, :releaserexceptions], :non_running_pipelines=>[]}
 [2019-01-04T12:46:34,013][INFO ][org.logstash.beats.Server] Starting server on port: 5043
 [2019-01-04T12:46:34,013][INFO ][org.logstash.beats.Server] Starting server on port: 5044
 [2019-01-04T12:46:34,453][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}