Logstash OutOfMemoryError

I've just started using Logstash 2.2.2 with 3 hosts uploading a combined average of 20,000 events per day, using Filebeat 1.1 to forward events. Typically after about 12 hours I get an OutOfMemoryError in the Logstash log. I've increased the heap size to 2 GB, and it tends to last a little more than 24 hours.
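(For what it's worth, I'm setting the heap through the LS_HEAP_SIZE environment variable that the 2.x startup scripts read; a minimal sketch, assuming a package install:)

# /etc/default/logstash on Debian/Ubuntu, /etc/sysconfig/logstash on RPM systems,
# or exported in the shell before running bin/logstash.
# On Logstash 2.x this value is passed to the JVM as the maximum heap (-Xmx).
LS_HEAP_SIZE="2g"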

What is a typical Logstash heap size, and where is the best place to start troubleshooting this?

Can you provide your Logstash config? What are your startup parameters (filter workers, batch size)?

Maybe you can start Logstash with JMX parameters to analyze the memory usage.
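For example, something like this (a sketch; it assumes the 2.x startup scripts append LS_JAVA_OPTS to the JVM options, and the port is arbitrary):

# Expose a local, unauthenticated JMX port so jconsole/VisualVM can attach
# and watch heap usage and GC activity (test environments only: no auth, no SSL).
export LS_JAVA_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
bin/logstash -f /etc/logstash/conf.d/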

I have copied the output statement below. It turns out to be related to using an if statement in the output section of Logstash. I wanted to have separate indexes based on the source type (set with the prefix "myapp"); however, when this is put in place, I get what appears to be a memory leak. When I remove it and go with a static index name, Logstash does not run out of memory.

Is there a better way to separate indexes based on type?

output {
  if [type] =~ /^myapp-/ {
    elasticsearch {
      hosts => ["localhost"]
      sniffing => true
      manage_template => false
      index => "myapp-filebeat-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  else {
    elasticsearch {
      hosts => ["localhost"]
      sniffing => true
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  stdout { codec => rubydebug }
}
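(On the "better way" question: one option would be to compute the index prefix once in the filter stage and keep a single elasticsearch output instead of duplicating it; a rough sketch, where [@metadata][index_prefix] is just an illustrative field name:)

filter {
  if [type] =~ /^myapp-/ {
    # myapp events get their own index prefix
    mutate { add_field => { "[@metadata][index_prefix]" => "myapp-filebeat" } }
  } else {
    # everything else falls back to the beat name
    mutate { add_field => { "[@metadata][index_prefix]" => "%{[@metadata][beat]}" } }
  }
}

output {
  elasticsearch {
    hosts => ["localhost"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][index_prefix]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout { codec => rubydebug }
}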

I think it's the sniffing parameter:

This has been fixed in version 2.5.3. Many thanks to @jsvd and @cheald.

To install this you can do:

bin/plugin install --version 2.5.3 logstash-output-elasticsearch
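
If I remember correctly, you can confirm which version ended up installed with:

bin/plugin list --verbose logstash-output-elasticsearch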

After upgrading, all is well.