Cisco syslogs showing up as grokparsefailure when sent to Logstash

I am working through a Udemy course to familiarize myself with Elasticsearch, but I've run into a wall applying it to our current environment when it comes to Logstash. I am trying to send Cisco syslog data in, but when I watch the output in the CLI, every event comes through as a grokparsefailure.

         "type" => "syslog",
           "message" => "<189>751: Jul  9 17:55:29.743: %SYS-5-CONFIG_I: Configured from console by admin on vty0 (172.16.5.52)",
              "host" => "10.15.0.8",
          "@version" => "1",
        "@timestamp" => 2020-07-09T17:55:30.744Z,
              "tags" => [
            [0] "_grokparsefailure"

I have the following conf file:

    input {
      tcp {
        port => 514
        type => syslog
      }
      udp {
        port => 514
        type => syslog
      }
    }

    filter {
      if [type] == "syslog" {
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        date {
          match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }

    output {
      elasticsearch { hosts => ["localhost:9200"] }
      stdout { codec => rubydebug }
    }

I know I'm new enough at this to not know what I don't know. My initial thought is that maybe there needs to be a separate index for this, but then I am not sure how the events would be tagged in order to separate them from the others. Could it be that the format of my logs isn't matching the expected input? I've seen talk of a Cisco module as well. I've been looking for answers, but since I'm at that point of not knowing what I don't know, nothing ever seems to jump out as an answer.
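For example, I was picturing something like this in the output section (just me guessing at the approach, and the cisco-syslog index name is made up):

    output {
      if [type] == "syslog" {
        elasticsearch {
          hosts => ["localhost:9200"]
          # "cisco-syslog" is a made-up name; the date suffix puts each day in its own index
          index => "cisco-syslog-%{+YYYY.MM.dd}"
        }
      } else {
        elasticsearch { hosts => ["localhost:9200"] }
      }
      stdout { codec => rubydebug }
    }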

Thank you.

Yes. Your pattern

"%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"

expects a timestamp followed by a hostname, a program name, and a PID. Something like

Jul  9 17:55:29.743 localhost inetd[1234]: some message

But that is not what your message looks like, so you get a parse failure.
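Your message starts with a priority value and a sequence number instead, followed by a Cisco-style timestamp with milliseconds. As a rough sketch (checked only against the one line you posted, and the field names are just my choices), something along these lines should be closer:

    grok {
      match => {
        "message" => "<%{NONNEGINT:syslog_pri}>%{NONNEGINT:sequence}: %{SYSLOGTIMESTAMP:syslog_timestamp}: %%{WORD:facility}-%{INT:severity}-%{WORD:mnemonic}: %{GREEDYDATA:syslog_message}"
      }
    }

The doubled %% is a literal percent sign in front of the facility; only the %{...} sequences are pattern references.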

There is an example of parsing Cisco log messages here. This might also help. Also this.

Thank you so much for the information.
This is very humbling. I've spent the last 3 hours trying to go through those links and understand them, and I'm still at a complete loss.

I understand that the date information isn't being parsed properly. I have tried to use the date patterns from the links you provided, but they won't even allow Logstash to start; I just get cryptic pipeline errors saying that certain characters were expected at the ends of the lines.

I tried adjusting the match field down to just

match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{GREEDYDATA:syslog_message}" }

which I thought would just pull the timestamp and put everything after it into the message. But when I do that, although Logstash starts, I don't even see the log come through on stdout.

That pattern has a space after the timestamp, but your message has a colon there.

BTW, just starting with one field and then expanding the pattern one or two fields at a time is a good idea.
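For example, against your sample message the progression might look like this (field names are arbitrary; test each step before moving on):

    # Step 1: just the timestamp -- note the colon right after it, not a space
    "%{SYSLOGTIMESTAMP:syslog_timestamp}:%{GREEDYDATA:syslog_message}"

    # Step 2: pin down the front by adding the priority and sequence number
    "<%{NONNEGINT:syslog_pri}>%{NONNEGINT:sequence}: %{SYSLOGTIMESTAMP:syslog_timestamp}:%{GREEDYDATA:syslog_message}"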

Thank you for your help. After more testing and trying, I found that

match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp}%{GREEDYDATA:syslog_message}" }

was a good start and at least got me past the initial grok failure. Now to work on adding the other pieces to parse them out. There is still a lot that I don't yet 'grok' in grok, but at least I can ingest data in the meantime.
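The date filter looks like the next piece; since the Cisco timestamp carries milliseconds, my guess (untested) is that the match patterns will need .SSS added, something like:

    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss.SSS", "MMM dd HH:mm:ss.SSS" ]
    }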

Thank you!
