Logstash 2.0.0 Running Out of Memory

I'm using Logstash 2.0.0 to parse and index Nginx access logs shipped from a remote Filebeat instance. My configuration is very simple (shown below), but every now and then Logstash runs out of memory and crashes; it seems to happen roughly every 24 hours. I've seen several similar reports here and elsewhere, but none of them mention a specific workaround. Any thoughts on what I can try to get this sorted out?
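For context, the Filebeat side is just the stock Logstash output. The log path, hostname, and document_type below are placeholders rather than my exact values, but it is essentially this (Filebeat 1.x syntax):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/nginx/access.log
      document_type: nginx-access

output:
  logstash:
    hosts: ["logstash.example.com:5044"]

And here is my Logstash configuration: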

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    patterns_dir => "./patterns"
    match => { "message" => "%{NGINX_ACCESS_LOG}" }
  }
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    sniffing => true
    manage_template => false
    index => "nginx"
    document_type => "%{[@metadata][type]}"
  }
  stdout {}
}
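The NGINX_ACCESS_LOG pattern in ./patterns is essentially the standard combined log format. Paraphrasing it from memory (the exact field names may differ slightly), it looks like this:

NGINX_ACCESS_LOG %{IPORHOST:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:timestamp}\] "%{WORD:method} %{DATA:request} HTTP/%{NUMBER:http_version}" %{NUMBER:status} %{NUMBER:body_bytes_sent} "%{DATA:http_referer}" "%{DATA:http_user_agent}"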

Unfortunately the problem keeps recurring. Is this a bug in the latest Logstash, or is there something in my configuration that could be causing it?
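Would raising the heap be a reasonable workaround, e.g. starting Logstash with LS_HEAP_SIZE=2g bin/logstash -f nginx.conf (where nginx.conf is the config above), or would that just delay the crash if something in the pipeline is leaking?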