Syslog input plugin not working

Dear all,

I'm having a strange issue.

I have a pipeline listening for syslog data. When data comes in, I just see a new connection from the client (the syslog server), and then the connection is closed. Nothing is processed, but using tcpdump I can see the data coming in.

Let's start from the beginning. Here is my pipeline:

input {
 syslog {
   port => 5570
   grok_pattern => "(%{SYSLOG5424PRI})?%{GREEDYDATA:message}"
   tags => "fileoutput"
 }
}

filter {
  mutate {
    copy => {"message"  => "[event][original]"}
  }

  # Get event ingested timestamp
  ruby { 
    code => "event.set('[event][ingested]', Time.now());"
  }

  # This will log the hostname of the logstash server who process the log.
  ruby {
    init => "require 'socket'"
    code => "event.set('Logstash.Hostname', Socket.gethostname)"
  }   

  grok {
    # This pattern will match Passwordstate syslog entries
    # More data handling for 'message' field will be done below. See <Section pour PasswordState>.
    patterns_dir => "/etc/logstash/pipelines/patterns"
    match => ["message", "(%{SYSLOG5424PRI:syslog_index})?%{PSTATE_TIMESTAMP:syslog.date} %{SYSLOGHOST:syslog.ip}\s+%{DATA:[syslog][program]}: %{GREEDYDATA:[syslog][message]}"]
    tag_on_failure => ["_main_syslog_grok_parseerror"]
    overwrite => ["message"]
  }

  date {
    match => [ "[syslog.date]", "yy-MM-dd HH:mm:ss" ]
    remove_field => "[syslog.date]"
  }

  ##############################
  # Section pour PasswordState #
  ##############################
  if( [syslog][program] == "Passwordstate" ) {
     # do some processing; removed to reduce number of character in this post  
  }   
}

output {
  # Add a tag "fileoutput" to the filebeat source to send to a file (for tests or debuging) or "fileonly" to avoid sending to the cluster ELK
  if "fileoutput" in [tags] or "fileonly" in [tags] {
    file {
      path => "/data/logstash/p5570-Syslogs.log"
      codec => rubydebug {
        metadata => true
      }
    }
  }
}

A data line that comes from the system looks like this:

<110>2024-11-20 13:58:22 60.62.64.69   Passwordstate: Manual logoff for UserID 'domain\\user1234' from the IP Address '60.63.65.163'. Client IP Address = 60.63.65.163

Here is an output from the tcpdump command:

14:17:49.540762 IP 10.32.14.89.55534 > 10.32.14.228.5570: Flags [P.], seq 1:273, ack 1, win 8212, length 272
        0x0000:  4500 0138 bf0e 4000 8006 0935 0a20 0e59  E..8..@....5...Y
        0x0010:  0a20 0ee4 d8ee 15c2 910b 9139 9f4d 285b  ...........9.M([
        0x0020:  5018 2014 2119 0000 3c31 3130 3e32 3032  P...!...<110>202
        0x0030:  342d 3131 2d32 3020 3134 3a31 373a 3437  4-11-20.14:17:47
        0x0040:  2031 302e 3332 2e31 342e 3839 2020 2050  .60.62.64.69...P
        0x0050:  6173 7377 6f72 6473 7461 7465 3a20 5068  asswordstate:.Us
        0x0060:  696c 6970 7065 2044 6f79 6f6e 2028 7573  sernam.testt.(do
        0x0070:  6865 7262 726f 6f6b 655c 646f 7970 3137  mainsdoke\abcd12
        0x0080:  3031 2920 7365 6e74 2061 2053 656c 6620  01).sent.a.Self.
        0x0090:  4465 7374 7275 6374 204d 6573 7361 6765  Destruct.Message
        0x00a0:  2c20 6e6f 7420 7265 6c61 7465 6420 746f  ,.not.related.to
        0x00b0:  2061 2073 7065 6369 6669 6320 7061 7373  .a.specific.pass
        0x00c0:  776f 7264 2072 6563 6f72 642c 2066 726f  word.record,.fro
        0x00d0:  6d20 7468 6520 6d61 696e 2054 6f6f 6c73  m.the.main.Tools
        0x00e0:  204d 656e 7520 746f 2074 6865 2065 6d61  .Menu.to.the.ema
        0x00f0:  696c 2061 6464 7265 7373 206f 6620 444f  il.address.of.ab
        0x0100:  5950 3137 3031 4075 7368 6572 6272 6f6f  cd1201@domainsdo
        0x0110:  6b65 2e63 612e 2043 6c69 656e 7420 4950  ke.com..Client.IP
        0x0120:  2041 6464 7265 7373 203d 2031 3332 2e32  .Address.=.182.1
        0x0130:  3130 2e31 302e 3736                      10.30.176

At the same time, I see these lines in the Logstash log file:

[2024-11-20T14:17:47,616][INFO ][logstash.inputs.syslog   ][5570_Syslogs][e912bdf967c82380c91bba3ff10f3cda089aaa34a894dfba51c78d37e215b01d] new connection {:client=>"69.62.64.69:55600"}
[2024-11-20T14:17:47,617][INFO ][logstash.inputs.syslog   ][5570_Syslogs][e912bdf967c82380c91bba3ff10f3cda089aaa34a894dfba51c78d37e215b01d] connection closed {:client=>"69.62.64.69:55600"}

But nothing is processed; the event is simply dropped.

Then I changed the input plugin to this:

input {
  tcp {
    port => 5570
    type => "syslog"
    codec => line {
      charset => "ISO-8859-1"
    }    
    tags => "fileoutput"
  }
}

With this change, it works.

Does anyone have an idea why the syslog input plugin is dropping the events that come from this source? I have several other sources using the same pipeline, and they all work except the one coming from our Passwordstate environment.

It is also strange that the data coming in on the syslog port (5570/tcp) is dropped without any error in the Logstash log file.

If you need any additional information, please let me know!

Note: IP addresses and sensitive data have been replaced with fake information for privacy purposes.

Best Regards,
Yanick

What is sending the data?

I suspect that the tool being used to send the data does not send messages according to RFC 3164, which is the format supported by this input.
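For example, the sample line you shared starts with an ISO-style date (`2024-11-20 13:58:22`), while RFC 3164 expects a `Mmm dd hh:mm:ss` timestamp right after the `<PRI>`. A quick Ruby sketch illustrates the difference (the regex is only a rough approximation of the RFC 3164 header, and the sample lines are taken from your post):

```ruby
# Rough RFC 3164 header shape: "<PRI>Mmm dd hh:mm:ss HOST TAG: MSG".
# The real grammar is more permissive; this regex is only illustrative.
RFC3164_HEADER = /\A<\d{1,3}>[A-Z][a-z]{2} [ \d]\d \d{2}:\d{2}:\d{2} \S+ /

conforming = "<110>Nov 20 13:58:22 60.62.64.69 Passwordstate: Manual logoff"
actual     = "<110>2024-11-20 13:58:22 60.62.64.69   Passwordstate: Manual logoff"

puts RFC3164_HEADER.match?(conforming) # => true
puts RFC3164_HEADER.match?(actual)     # => false
```

If the sender uses a non-conforming header like that, the syslog input can refuse the message even though the bytes arrive on the socket.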

Also, in your tcp input you set both the codec and the charset, but you didn't apply them to the syslog input. Why were they changed? Have you tried using the same codec/charset in the syslog input?

Personally, I never use the syslog input, as it does not work with everything; not all syslogs are the same. I prefer to use the tcp or udp inputs. The main difference is that the syslog input parses the message without you needing to add a filter, but you can add a filter in the filter block and get the same behaviour.
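As a sketch, something along these lines should give roughly the same result as the syslog input (the patterns here are illustrative and would need to be adapted to your messages):

```
input {
  tcp {
    port => 5570
    type => "syslog"
  }
}

filter {
  # Split off the <PRI> part, as the syslog input would
  grok {
    match => ["message", "(%{SYSLOG5424PRI})?%{GREEDYDATA:message}"]
    overwrite => ["message"]
  }
  # Decode the priority into facility/severity fields
  syslog_pri { }
}
```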


Hi @leandrojmp !

Thank you so much for your reply!

The reason for the charset and the codec is simple: I took them from another pipeline I have without modifying them. I have not tried removing the codec and the charset from the tcp input to see if it still works.

To answer your question, those logs come from a third-party system running "Passwordstate". A couple of years ago, I configured the ingestion of those logs into Elastic using the syslog plugin, and it worked perfectly.

Yesterday, one of my colleagues told me he could no longer find those logs in Kibana, so I started to investigate, and these were my findings: all of a sudden, it stopped working with the syslog input plugin. Those logs were not accessed very often, so I don't know how long we have had this issue, and all older logs have been wiped out.

Since the tcp input is working as expected, with some additional filters as you said, I think I will leave it like that. I just created another pipeline on another port specifically for this app (I do not have any other tcp input pipeline).

Maybe an update to that system changed the way the logs are sent, and they are no longer compatible with RFC 3164.

Thanks again!

Yanick