Metrics filter + graphite output blocks pipeline during output failure

Our Logstash instance is configured to read HTTP logs from redis, filter them, and output to elasticsearch. During filtering, we use the metrics plugin to count the events being processed by Logstash. Here's a simplified version of our config:

input {
  redis { host => "redis.example.com" }
}
filter {
  grok { ...snip... }
  mutate { add_tag => "request" }
}
filter {
  metrics {
    meter => "events"
    add_tag => "metric"
  }
}
output {
  if "metric" in [tags] {
    graphite {
      host => "graphite.example.com"
      metrics => { "%{host}.logstash.rate_1m" => "%{[events][rate_1m]}" }
    }
  }
}
output {
  if "request" in [tags] {
    elasticsearch { host => "es.example.com" }
  }
}

If the host graphite.example.com is down, the graphite output blocks, which stalls the entire output stage -- including events tagged "request" that are destined for the elasticsearch output.

I'm trying to understand how to structure/configure this pipeline so that events are always delivered to elasticsearch, with metrics delivered to graphite on a best-effort basis. A problem with the latter should never impact the former.
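One idea I've been looking at (assuming a Logstash version with pipeline-to-pipeline support, 6.3+) is the "output isolator pattern": keep the metrics filter in the main pipeline, but hand metric events to a second pipeline that owns the graphite output and has its own persisted queue, so a graphite outage fills that queue instead of blocking the main pipeline. Roughly (pipeline IDs and paths are illustrative):

```
# pipelines.yml
- pipeline.id: main
  path.config: "/etc/logstash/main.conf"
- pipeline.id: to_graphite
  path.config: "/etc/logstash/graphite.conf"
  queue.type: persisted   # buffers metric events while graphite is down

# main.conf -- outputs only; inputs/filters unchanged
output {
  if "metric" in [tags] {
    pipeline { send_to => ["to_graphite"] }
  }
  if "request" in [tags] {
    elasticsearch { host => "es.example.com" }
  }
}

# graphite.conf
input { pipeline { address => "to_graphite" } }
output {
  graphite {
    host => "graphite.example.com"
    metrics => { "%{host}.logstash.rate_1m" => "%{[events][rate_1m]}" }
  }
}
```

The caveat, as I understand it, is that this only decouples the pipelines until the downstream persisted queue fills up; once it's full, back-pressure would reach the main pipeline again.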

My understanding of the Logstash pipeline is that it is a single process, and that to achieve isolation you need either multiple agents or multiple Logstash instances. Here, though, the metrics filter depends on seeing every event in the existing pipeline, so I can't simply pull it out into its own agent -- or can I?
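To partially answer my own question: the metrics filter itself would have to stay in the main pipeline (it needs to see every event to count them), but the graphite *delivery* could move to a second agent, with an intermediate queue in between -- e.g. a second redis list, since redis is already part of our stack. A sketch (the "logstash-metrics" key name is illustrative):

```
# Main agent: replace the graphite output with a redis output
output {
  if "metric" in [tags] {
    redis {
      host      => "redis.example.com"
      key       => "logstash-metrics"
      data_type => "list"
    }
  }
  if "request" in [tags] {
    elasticsearch { host => "es.example.com" }
  }
}

# Second agent: drains the metrics list and forwards to graphite
input {
  redis {
    host      => "redis.example.com"
    key       => "logstash-metrics"
    data_type => "list"
  }
}
output {
  graphite {
    host => "graphite.example.com"
    metrics => { "%{host}.logstash.rate_1m" => "%{[events][rate_1m]}" }
  }
}
```

With this layout a graphite outage only grows the redis list; the main pipeline keeps shipping to elasticsearch. The trade-off is that an unbounded outage can grow that list until redis runs out of memory.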
