Hi everyone. I am a beginner in logstash and ELK in general. Lately I have been facing an issue with logstash.
I am receiving some logs on port 3014, and this is the second time this has happened, so I have identified a pattern: every time a log that can't be decoded arrives on port 3014, Logstash stops processing any logs. I will try to explain this as clearly as I can, and I apologise if I miss something.
My logstash configuration looks like this:
input {
  syslog {
    port => 3014
    codec => cef
    syslog_field => "syslog"
    grok_pattern => "<%{POSINT:priority}>%{SYSLOGTIMESTAMP:timestamp}"
  }
}
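For clarity, this is roughly what I expect that grok_pattern to match on the syslog header. Here is a Python sketch of it, with POSINT and SYSLOGTIMESTAMP expanded by hand (the sample message below is made up, not one of my real logs):

```python
import re

# Rough Python equivalent of the grok_pattern above:
#   POSINT          -> [1-9][0-9]*
#   SYSLOGTIMESTAMP -> e.g. "Oct 11 22:14:15"
header = re.compile(
    r"<(?P<priority>[1-9][0-9]*)>"
    r"(?P<timestamp>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2})"
)

# A fabricated CEF-over-syslog line, just to show the header being parsed
m = header.match("<34>Oct 11 22:14:15 myhost CEF:0|vendor|product|...")
```

Events whose header doesn't match this shape would be the ones I'd expect to fail decoding.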
filter {
  prune {
    whitelist_names => ["@timestamp", "message", "name", "destinationUserName", "sourceUserName", "ad.loginName", "sourceServiceName", "ad.destinationHosts", "userID", "deviceAction", "deviceEventClassId"]
  }
  mutate { gsub => [ "ad.loginName", "USERNM[\\]", "" ] }
  if [destinationUserName] and [sourceUserName] {
    mutate { add_field => { "userID" => "%{ad.loginName}" } }
  } else if [destinationUserName] {
    mutate { add_field => { "userID" => "%{destinationUserName}" } }
  } else if [sourceUserName] {
    mutate { add_field => { "userID" => "%{sourceUserName}" } }
  }
}
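In case the filter is hard to read, this is the intent of the gsub and the userID conditionals, sketched in Python (not real Logstash code; the field names are the ones from my config):

```python
import re

def derive_user_id(event):
    """Mirrors the filter block: strip any 'USERNM\\' prefix from
    ad.loginName, then pick userID by precedence."""
    # mutate/gsub: remove "USERNM\" occurrences from ad.loginName
    if "ad.loginName" in event:
        event["ad.loginName"] = re.sub(r"USERNM[\\]", "", event["ad.loginName"])

    # Conditionals: if both user fields exist, prefer ad.loginName;
    # otherwise fall back to whichever one is present.
    if event.get("destinationUserName") and event.get("sourceUserName"):
        return event.get("ad.loginName")
    if event.get("destinationUserName"):
        return event["destinationUserName"]
    if event.get("sourceUserName"):
        return event["sourceUserName"]
    return None
```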
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash_index"
  }
  stdout {
    codec => rubydebug
  }
}
This is now the second time that, when a log containing some gibberish (undecodable) text arrives, Logstash stops receiving any sort of data on port 3014.
The only way I can start receiving logs again is to restart the VM.
I was wondering whether this is a case where Logstash doesn't know how to handle this specific event, throws an error, and stops receiving logs, and whether there is any workaround for this issue.
Thank you very much for your time and help, and I apologise again for the basic question.