Logstash unresponsive after new grok pattern added


(Peter) #1

Hello,

Logstash hangs after processing a few log files: it consumes 100% CPU, ignores SIGTERM and systemctl stop, and can only be stopped with kill -9. It stops processing events being sent to it and outputs nothing to ES. Finally, there are no error messages in logstash.err, logstash.log, or stdout.

Version:
LS 1.4.2 (this failed, so I upgraded to v1.5)
LS 1.5.0 (and got the exact same behaviour)

OS
Centos 7

Config:

input {
  tcp {
    port => 20514
    type => "syslog"
  }
}

filter {
  grok {
    match => [ "message", "%{SYSLOGTIMESTAMP:@timestamp} %{SYSLOGHOST:hostname} (?<json_msg>{.*})" ]
    match => [ "message", '{"Time":"%{SYSLOGTIMESTAMP:timestamp}","Type":"%{DATA:Type}","Hostname":"%{DATA:Hostname}","SourceModuleName":"%{DATA:SourceModuleName}","Logger":"%{DATA:Logger}","Severity":"%{DATA:Severity}","Message":" %{DATA}=%{TIME:time} %{DATA}=%{DATA:devname} %{DATA}=%{DATA:device_id} %{DATA}=%{DATA:log_id} %{DATA}=%{DATA:type} %{DATA}=%{DATA:subtype} %{DATA}=%{DATA:pri} %{DATA}=%{DATA:vd} %{DATA}=%{DATA:SN} %{DATA}=%{DATA:duration} %{DATA}=%{DATA:user} %{DATA}=%{DATA:group} %{DATA}=%{DATA:rule} %{DATA}=%{DATA:policyid} %{DATA}=%{DATA:proto} %{DATA}=%{DATA:service} %{DATA}=%{DATA:app_type} %{DATA}=%{DATA:status} %{DATA}=%{DATA:src} %{DATA}=%{DATA:srcname} %{DATA}=%{DATA:dst} %{DATA}=%{DATA:dstname} %{DATA}=%{DATA:src_int} %{DATA}=%{DATA:dst_int} %{DATA}=%{DATA:sent} %{DATA}=%{DATA:rcvd} %{DATA}=%{DATA:sent_pkt} %{DATA}=%{DATA:rcvd_pkt} %{DATA}=%{DATA:src_port} %{DATA}=%{DATA:dst_port} %{DATA}=%{DATA:vpn} %{DATA}=%{DATA:tran_ip} %{DATA}=%{DATA:tran_port} %{DATA}=%{DATA:dir_disp} %{DATA}=%{DATA:tran_disp} "}' ]
    match => [ "message", "(?<json_msg>{.*})" ]
    match => [ "message", "%{SYSLOGTIMESTAMP:@timestamp} %{SYSLOGHOST:hostname} %{GREEDYDATA:Message}" ]
  }
  json {
    source => "json_msg"
    remove_field => ["json_msg"]
    remove_field => ["message"]
  }
  mutate {
    gsub => [
      "Severity", "ERR$", "ERROR",
      "Severity", "EMERG", "ERROR",
      "Severity", "ALERT", "ERROR",
      "Severity", "WARN$", "WARNING",
      "Severity", "NOTICE", "WARNING"
    ]
  }
}

output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
}

If I remove the second grok statement (needed to process our Firewall logs) I do not encounter this error. I've checked the Firewall grok using online grok testers and found it to work. In fact, in the brief time before LS hangs, all log entries including the Firewall ones are correctly parsed and sent to ES.

Help.


(Magnus Bäck) #2

It may or may not have something to do with your problem, but I strongly suggest that you use the kv filter instead of your current grok filter. Well, you'll need to keep the grok filter, but you can scale down the expression a lot and use the kv filter for parsing the key/value pairs.

Also, use a json filter instead of parsing a JSON-formatted message with a grok expression.
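A rough sketch of that split, assuming the event shapes shown in the config above (the `rest` field name and the kv settings are illustrative, not taken from a tested config):

```
filter {
  grok {
    # One small grok to strip the syslog preamble; keep the remainder.
    match => [ "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{GREEDYDATA:rest}" ]
  }
  # If the remainder is a JSON document, decode it directly
  # instead of picking it apart with a grok expression.
  json {
    source => "rest"
  }
  # The firewall payload is "key=value key=value ...", which is
  # exactly what the kv filter is for.
  kv {
    source => "Message"
    field_split => " "
    value_split => "="
  }
}
```

This keeps the grok expression tiny and lets kv handle fields appearing in any order or not at all.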


(Peter) #3

Hello,

After doing some research I found that many people have had a similar issue with the Logstash agent dying on some regexps. So, I remade my filter using more specific patterns where appropriate, and this works.
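For illustration (these substitutions are an assumption about what "more specific" means here, not the exact patterns used), replacing the catch-all %{DATA} captures with tighter patterns gives the regex engine far fewer ways to backtrack on a key=value payload:

```
# Before: adjacent %{DATA}=%{DATA:...} pairs, each of which can match
# almost anything, so a non-matching line triggers heavy backtracking
%{DATA}=%{DATA:devname} %{DATA}=%{DATA:device_id}

# After: anchor on what a key and a value actually look like
%{WORD}=%{NOTSPACE:devname} %{WORD}=%{NOTSPACE:device_id}
```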

@magnusbaeck, I looked at the kv filter you mentioned and I would like to make use of it, as the data I'm receiving is not consistent and would otherwise require 2 additional filters to match almost all the log messages. So I think the kv parser is the way to go. Cheers.


(system) #4