I'm trying to use Logstash to tail a log file, parse it, count how many times a particular type of line shows up per minute and send that stat to Graphite, and also send every parsed line to Elasticsearch.
I'm running into an issue where, if I include the elasticsearch output, the metrics are no longer sent to Graphite at the expected interval.
Here is my config, with names altered:
input {
  file {
    path => "/data/log/example.log"
    codec => plain
    sincedb_path => "/data/logstash/.sincedb"
    sincedb_write_interval => 5
  }
}
filter {
  grok {
    match => { "message" => "^%{TIMESTAMP_ISO8601:ts} %{WORD:type}" }
    overwrite => [ "message" ]
  }
  if [type] == "some_type" {
    grok {
      match => { "message" => "^some_message" }
      add_tag => [ "interesting_logline" ]
    }
  }
  if "interesting_logline" in [tags] {
    metrics {
      meter => [ "logstash.interesting_logline" ]
      clear_interval => 60
      flush_interval => 60
      add_tag => "metric"
      percentiles => []
      rates => []
    }
  }
}
output {
  if "_grokparsefailure" in [tags] {
    file { path => "/data/log/logstash/parsefailure.log" }
  }
  if "metric" in [tags] {
    graphite {
      host => "graphite-host"
      port => 2003
      include_metrics => [ "logstash.*" ]
      fields_are_metrics => true
      resend_on_failure => true
    }
  } else {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
      flush_size => 1000
      idle_flush_time => 10
      workers => 1
      codec => "json"
    }
  }
}
If I omit the elasticsearch output stanza, I get metrics in Graphite every 60 seconds as expected. When I include it, however, I only get metrics sporadically, every few minutes. I've tried Logstash v2.3.2 and v5.1.1, and both behave the same way. Is there something else I should be doing to get the metric events sent to Graphite consistently on a fixed interval?
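For what it's worth, one way I've been observing the flush timing is to temporarily swap the graphite output for a stdout output so I can watch the metric events directly. This is just a debugging sketch, not part of the real config:

output {
  # Debug only: print metric events so I can see how often the
  # metrics filter actually flushes.
  if "metric" in [tags] {
    stdout { codec => rubydebug }
  }
}

The rubydebug codec prints the full event including @timestamp, which makes it easy to check the gap between flushes, and I see the same pattern there: regular 60-second flushes without the elasticsearch output, irregular ones with it.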