Log cascading issue with syslog and Logstash multiline configuration in an ELK setup

In our ELK implementation I found that using syslog as the shipper for sending logs to the Logstash server causes a couple of issues:

  1. During regression testing, multiple logs are shipped from the log file to the Logstash server at the same time (in a Java application), causing a cascading issue when the multiline plugin is defined in the Logstash server configuration.
  2. If you remove the multiline plugin, the log messages get broken down into small chunks, so a single log block (JSON) is not captured as one event.
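For context, the problematic setup looked roughly like this (a sketch only; the port and the multiline pattern are assumptions, not our actual config). The multiline codec joins every line that does not start with `{` onto the previous event, which breaks down when syslog interleaves lines from concurrent log events:

```
# Logstash pipeline sketch (assumed port and pattern, not the real config)
input {
  syslog {
    port => 5514
    codec => multiline {
      # any line that does not open a JSON object belongs to the previous event
      pattern => "^\{"
      negate  => true
      what    => "previous"
    }
  }
}
```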

Logs shipped via syslog do not arrive at the Logstash server in a consistent sequence under high volume.

Our solution: we got rid of syslog and used LogstashTcpSocketAppender instead, which improved performance.
Refer to the link for more info on the implementation.
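A minimal logback sketch for the TCP appender (assuming the logstash-logback-encoder library; the host name and port are placeholders, not our real values):

```xml
<!-- logback.xml: ship JSON events straight to Logstash over TCP -->
<configuration>
  <appender name="LOGSTASH"
            class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- placeholder host:port for your Logstash server -->
    <destination>logstash.example.com:5000</destination>
    <!-- emits one JSON document per event, so no multiline parsing is needed -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>
```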

This gives a substantial performance improvement, as logs are persisted in the order the events happened.
Logs also reach the Kibana page closer to real time compared to syslog,
since it cuts one layer out of the pipeline, as shown below:

Earlier:
Application --> application log file --> syslog --> Logstash server --> Elasticsearch --> Kibana

Now:
Application --> TCP appender --> Logstash server --> Elasticsearch --> Kibana
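On the Logstash side, the matching input for the TCP appender could look like this (a sketch; the port and Elasticsearch host are assumptions). Since each event arrives as a single JSON line, the `json_lines` codec replaces the multiline plugin entirely:

```
# Logstash pipeline sketch for the TCP-appender flow (assumed port/hosts)
input {
  tcp {
    port  => 5000
    codec => json_lines   # one complete JSON event per line
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```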

Try this at your end and share your input.

What's this "cascading issue" that you're talking about?

When JSON logs are input to the Logstash server:

  1. A JSON log can contain a long message field, possibly spanning multiple lines for a single parameter, so we need to use the multiline plugin to parse it on the Logstash server. When the shipper is syslog,
    the JSON logs captured by Logstash show inconsistent behaviour: some log messages get merged into other events, or broken half-messages get stored in the Elasticsearch DB through the Logstash server. This happens due to incorrect parsing.
    So syslog and multiline don't work together as expected.

The TCP appender maintains the correct sequence and streaming of logs, and these can be captured by the Logstash server without the multiline plugin.