Hi,
I just created the following grok filter and it causes Logstash to go totally berserk. It produces heavy load until it ultimately stops working. Occasionally a log event makes it through, but they keep piling up in the Redis buffer I installed between the pipelines. The host is a CentOS 7 machine with OpenJDK 8, 2 cores and 4 GB RAM (with Logstash using 2 GB).
Fewer than 20 hosts are sending their system logs, so it's only 30-140 events per minute. Very few other filters are active (it's a new setup), and Logstash is doing fine when I remove the file this filter is in. I'm copying the whole file so you can see there's no other filter in it.
filter {
  if [program] == "setroubleshoot" {
    if [message] =~ /^SELinux is preventing/ {
      grok {
        match => ["message", "SELinux is preventing %{UNIXPATH:application} from %{WORD:typeofaccess} access on the %{WORD:filetype} %{UNIXPATH:filename} For complete SELinux messages. run sealert -l %{NOTSPACE:sealertmessage}"]
        id => "setroubleshootprevent"
        add_tag => ["setroubleshootprevent", "grokked"]
        tag_on_failure => ["_grokparsefailure", "setroubleshootprevent_failed"]
      }
    }
  }
}
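For reference, here's a rough Python sanity check of what the pattern is supposed to capture. The sample message and the simplified sub-patterns are my own approximations (grok's real UNIXPATH definition is more permissive than this), so it only shows that the pattern matches a typical setroubleshoot line, not how grok behaves internally:

```python
import re

# Hand-simplified stand-ins for the grok patterns used in the filter.
# NOTE: these are assumptions for illustration; grok's actual UNIXPATH
# is more complex, and the sample message below is a made-up example
# of typical setroubleshoot output, not a real event from my hosts.
UNIXPATH = r"(?:/[\w_%!$@:.,+~-]+)+"
WORD = r"\w+"
NOTSPACE = r"\S+"

pattern = re.compile(
    r"SELinux is preventing (?P<application>" + UNIXPATH + r") "
    r"from (?P<typeofaccess>" + WORD + r") "
    r"access on the (?P<filetype>" + WORD + r") "
    r"(?P<filename>" + UNIXPATH + r") "
    r"For complete SELinux messages\. run sealert -l "
    r"(?P<sealertmessage>" + NOTSPACE + r")"
)

sample = (
    "SELinux is preventing /usr/sbin/httpd from write access on the "
    "file /var/www/html/index.html "
    "For complete SELinux messages. run sealert -l "
    "8c123456-aaaa-bbbb-cccc-123456789abc"
)

m = pattern.match(sample)
print(m.groupdict() if m else "no match")
```

With this simplified version the sample line matches and all five fields come out as expected, which is why I don't understand why the real filter grinds Logstash to a halt.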
The pattern is an early version and may not be perfect, but it shouldn't be bad enough to block Logstash. I'm using Logstash 6.2.4 and tried to update the grok plugin, but it seems there is no newer version available.
Can you help me?
Cheers,
Thomas