Logstash/syslog/CEF 1024-character limit

I have an issue with some devices sending CEF-encoded syslog messages over TCP to Logstash: messages longer than 1024 characters are split at the 1024-character position, and each section is then processed as a separate event. The first part parses successfully but is missing some of the key/value pairs, while the second part fails because it is not a valid syslog or CEF-encoded message. I'm struggling to understand why this happens, and I haven't found anything on Google or by searching this forum. Can anyone enlighten me, please? Besides the production system, I can reproduce it in a simple test instance launched as follows:

sudo bin/logstash -e 'input { tcp { port => 1514 type => "syslog" codec => cef{} } } output { stdout { } }'

and then using netcat to send test CEF-encoded syslog messages to it:

nc localhost 1514 < test23.cef

The contents of the test files are similar to:

<14>Nov 5 17:56:00 foo0bar.co.uk CEF:0|Foobar Co|FOO44|1.2.33|x4|Hit|1|deviceExternalId=2278 src=111.222.111.222 dst=222.111.222.42 etc
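
If it helps anyone reproducing this, you can rule out the test data itself by printing the length of each message in the file and checking that the last byte is a linefeed, e.g. (standard awk/od, just a sanity check rather than part of the fix):

awk '{ print length }' test23.cef   # length of each line/message; mine are well over 1024
tail -c 1 test23.cef | od -c        # should show \n, i.e. the final message is newline-terminated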

I resolved this by adding a linefeed delimiter for the CEF codec, e.g.

sudo bin/logstash -e 'input { tcp { port => 1514 type => "syslog" codec => cef { delimiter => "\n" } } } output { stdout { } }'

The incoming TCP stream delimits each message with a linefeed. With the delimiter configured, the codec appears to buffer incoming data until it sees a linefeed, so a message that arrives split across multiple TCP reads is reassembled into a single event instead of each chunk being decoded separately.
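
For a longer-lived setup, the same thing can go in a pipeline config file rather than a -e one-liner. A minimal sketch (pipeline.conf is just an illustrative name, and rubydebug is my choice of output codec for readable test output):

# pipeline.conf -- run with: sudo bin/logstash -f pipeline.conf
input {
  tcp {
    port  => 1514
    type  => "syslog"
    codec => cef { delimiter => "\n" }
  }
}
output {
  stdout { codec => rubydebug }
}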

Thanks for the update... does that truncate the message if it exceeds 1024 characters?

No, it does not, which is good, because many of the events I am processing are longer than that.
