Logstash eating logs


#1

Hello there,

I have a 2-node Logstash cluster receiving 3,000 messages per second on average. I have just noticed that it is not processing all the messages it receives; they are lost somewhere (the UDP packets arrive on the host, but Logstash seems to ignore them). I am using nxlog to send logs in GELF format over UDP. My Logstash input config looks like this:

gelf {
  port => 12205
  type => windows
  codec => "json"
}

gelf {
  port => 12201
  type => windows
  codec => "json"
}

12201 is the main port, where hundreds of Windows servers are sending their event logs; the input on port 12205 was created purely for troubleshooting. I'll explain why.
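For completeness, this is roughly the single-input config I need to end up with. Note the json codec is just something I copied from an example; as far as I understand, the gelf input decodes GELF itself, so that line may be unnecessary (this sketch is my guess, not a verified fix):

```
input {
  gelf {
    # all Windows servers must keep sending to this one port
    port => 12201
    type => windows
  }
}
```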

I picked one server and compared the events in its event log with what reached Elasticsearch, and found that most of them never arrived (I also checked with the stdout output to rule out any issue on the Elasticsearch side).

I manually created test events on that server and watched Logstash's stdout output to capture them. Most of the events never showed up on the screen (only about 1 out of 10 did).
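While repeating that test, I also sampled the kernel's UDP error counter to see whether the packets are dropped before Logstash even reads them. This is a rough sketch assuming a Linux host; the counter comes from /proc/net/snmp:

```shell
# Sample the kernel's UDP "InErrors" counter around a burst of test events.
# If the delta grows while events go missing, the kernel is discarding
# datagrams before Logstash reads them (typically a full socket receive
# buffer), rather than Logstash dropping them internally.
udp_in_errors() {
  # second "Udp:" line in /proc/net/snmp holds the values; field 4 is InErrors
  awk '$1 == "Udp:" { n++; if (n == 2) print $4 }' /proc/net/snmp
}

before=$(udp_in_errors)
sleep 1   # send the manual test events during this window
after=$(udp_in_errors)
echo "UDP InErrors during window: $((after - before))"
```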

Then I created a second gelf input on port 12205 (while keeping port 12201 open, with hundreds of servers still sending to it) and experienced no message loss there. The problem is that I have to use a single port; I cannot afford to split the servers across different ports.
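One theory I have about why the second port behaves differently (untested, and the value below is illustrative, not a recommendation): each UDP socket has its own receive buffer, capped by net.core.rmem_max, so a fresh socket on 12205 starts idle while the busy one on 12201 could be overflowing during bursts:

```shell
# Read the per-socket receive buffer cap on a Linux host. If every input
# socket is limited to a small default (often ~212 KB), a single socket
# taking ~3000 msgs/s could overflow during bursts, while a second, idle
# socket on port 12205 would lose nothing.
cap=$(cat /proc/sys/net/core/rmem_max)
echo "rmem_max cap: ${cap} bytes"

# Raising the cap requires root; I have not tried this yet and the figure
# is a guess:
#   sysctl -w net.core.rmem_max=16777216
```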

Any idea why this is happening and what I should do to remedy it? I appreciate your help.

