Logstash cmd displaying wrong output

Hi, I am new to the ELK stack and I am trying to test parsing of sample syslog messages by following the steps on the official website --
https://www.elastic.co/guide/en/logstash/current/config-examples.html#_processing_syslog_messages

This is my config file:

input {
  tcp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { # output to ES
    hosts => [ "localhost:9200" ]
    index => "indexforsyslog1"
  }
  stdout { codec =>  "rubydebug" }
}
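For reference, the grok pattern in my filter expects a classic BSD-style syslog line. Here is a rough Python approximation I used as a sanity check — the regex below is my own hand-expansion of SYSLOGTIMESTAMP/SYSLOGHOST/DATA/POSINT (the real grok definitions are more permissive), and the sample line only has the same shape as the one in the docs, not the exact contents:

```python
import re

# Hand-rolled approximation of the grok pattern in the config above.
# This is NOT the exact regex grok compiles; it only checks the overall shape.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "  # SYSLOGTIMESTAMP
    r"(?P<syslog_hostname>\S+) "                                  # SYSLOGHOST
    r"(?P<syslog_program>[\w./-]+)(?:\[(?P<syslog_pid>\d+)\])?: " # DATA[POSINT]:
    r"(?P<syslog_message>.*)"                                     # GREEDYDATA
)

# Placeholder line with the same shape as the sample in the official docs.
line = "Dec 23 14:30:01 myhost sshd[4242]: Failed password for invalid user admin"

m = SYSLOG_RE.match(line)
print(m.group("syslog_timestamp"))                    # Dec 23 14:30:01
print(m.group("syslog_program"), m.group("syslog_pid"))  # sshd 4242
```

A line like that matches fine, which is why I expected the config to work.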

After I connect with the telnet client and paste the sample syslog messages (given on the website), I get the following output in the original Logstash terminal:

{
          "tags" => [
        [0] "_grokparsefailure"
    ],
       "message" => "\u0016\r",
          "port" => 50792,
          "type" => "syslog",
      "@version" => "1",
          "host" => "0:0:0:0:0:0:0:1",
    "@timestamp" => 2022-02-11T06:38:56.126Z
}
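In case it matters, I also tried sending a line with a small Python script instead of telnet, since the "\u0016\r" in the failed event looks like it could be terminal or telnet control bytes rather than the text I pasted. This is just a sketch — the host/port match my tcp input, and the line contents are placeholders:

```python
import socket

def send_syslog_line(line: str, host: str = "localhost", port: int = 5000) -> None:
    """Send one raw syslog line over plain TCP, with no telnet negotiation bytes."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(line.encode("utf-8"))

if __name__ == "__main__":
    # Same shape as the sample line in the docs; contents are placeholders.
    send_syslog_line(
        "Dec 23 14:30:01 myhost sshd[4242]: "
        "Failed password for invalid user admin\n"
    )
```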

I need help figuring out why this happens and how to fix it so that I get the right output. Please help.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.