Cannot get original timestamp from WSO2 logs coming from source -> Filebeat -> Logstash -> Elasticsearch

I currently have Logstash and Filebeat running on the same AWS EC2 instance as a proof of concept: read logs from EFS and push them to Elasticsearch for us to view in Kibana. It is all set up and working nicely; there is just one issue I can't seem to figure out, and that's getting the original WSO2 log timestamp to come all the way through to Kibana and be used as the @timestamp field. From what I have read, Logstash is where I should be setting this, so I assume it's something I am doing wrong in the Logstash config.
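
For completeness, the Filebeat side is just a plain log input pointed at the WSO2 log on the EFS mount, shipping to the local Logstash. Roughly the following (the path here is a placeholder, not my actual mount point):

filebeat.inputs:
  - type: log
    paths:
      # Placeholder path - points at wso2carbon.log on the EFS mount
      - /mnt/efs/wso2/repository/logs/wso2carbon.log

output.logstash:
  # Logstash beats input on the same instance
  hosts: ["127.0.0.1:5044"]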

The Logstash config is as follows:

input {
  beats {
    type => "beats"
    host => "127.0.0.1"
    port => 5044
  }
}
filter {
  if [type] == "beats" {
    # Parse the WSO2 log line into timestamp, level, class, and message fields
    grok {
      match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{SPACE}%{SPACE}%{LOGLEVEL:level}%{SPACE}%{NOTSPACE:class}%{SPACE}-%{SPACE}%{SPACE}%{JAVALOGMESSAGE:log_message}" }
      tag_on_failure => ["failed-to-parse"]
      remove_field => [ "message" ]
    }
    # Use the parsed timestamp field as the event's @timestamp
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
      target => "@timestamp"
    }
  }
}
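
For what it's worth, a quick way to inspect what the filters actually produce is to add a stdout output alongside the Elasticsearch one while testing, then check whether events carry the failed-to-parse tag and what the timestamp and @timestamp fields end up holding:

output {
  # Debugging only: print each event so the parsed fields and @timestamp can be inspected
  stdout {
    codec => rubydebug
  }
}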

What does a message look like?
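
If it's the standard WSO2 Carbon layout, I'd expect something like the line below (an assumption on my part; please paste a real one), which the grok pattern and date format above should match:

[2021-05-12 10:13:14,456]  INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} -  'admin@carbon.super [-1234]' logged in at [2021-05-12 10:13:14,456+0000]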
