Issue replacing the @timestamp field with the timestamp from logs using the Logstash date filter

Hi,
We are just learning to set up the ELK stack: Filebeat --> Logstash --> Elasticsearch <-- Kibana.

Filebeat on our server sends the logs to Logstash, which is configured to forward them to Elasticsearch, and Kibana is configured to display them in the UI.

We are trying to replace the @timestamp field shown in the Kibana UI with the timestamp from the log entry, using the Logstash date filter.

Here is what a log entry looks like:
30 Jul 2020 01:35:39,762 WARN  [DebugLogger 245] (Thread-93082 (ActiveMQ-client-factory-threads-910567953)) O_DT-ET0003 TransportMDB.onMessage Exception: Component TroubleTicket rejected message. caused by Could not send trouble ticket caused by Trouble ticket service cannot be reached

We now want to replace the @timestamp field in the Kibana UI with this timestamp: 30 Jul 2020 01:35:39,762
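As a sanity check outside Logstash, that timestamp can be parsed with an equivalent strptime format. This is just a sketch in plain Ruby to confirm the field layout (day, abbreviated month, year, time, comma, milliseconds); the Logstash date filter itself takes Joda-style patterns such as dd MMM yyyy HH:mm:ss,SSS, not strptime tokens:

```ruby
require 'date'

# "30 Jul 2020 01:35:39,762" -> day of month, abbreviated month name,
# four-digit year, HH:MM:SS, a comma, then milliseconds (%L).
log_time = DateTime.strptime('30 Jul 2020 01:35:39,762',
                             '%d %b %Y %H:%M:%S,%L')
puts log_time.iso8601(3)
```

If this parses cleanly, the same field layout expressed in Joda tokens should be what the date filter needs to match.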

When we configure 02-beats-input.conf on Logstash as below:
filter {
	grok {
		patterns_dir => ["/etc/logstash/patterns/"]
	match => {
		"message" => [
			"%{DATESTAMP:logTime}%{SPACE}%{WORD:logLevel}%{SPACE}\[%{DATA:logger}\] \(%{DATA:Thread}\) %{DATA:msgCode} (?<logMessage>.*)",
			"%{DATESTAMP:logTime}%{SPACE}%{WORD:logLevel}%{SPACE}\[%{DATA:logger}\] \(%{DATA:Thread}\) (?<logMessage>.*)",
			"%{TIME:logTime}%{SPACE}%{WORD:logLevel}%{SPACE}\[%{DATA:logger}\] \(%{DATA:Thread}\) (?<logMessage>.*)",
			"%{TIMESTAMP_ISO8601:logTime}%{SPACE}%{WORD:logLevel}%{SPACE}\[%{DATA:Thread}\] (?<message>.*)",
			"%{TIME:logTime} \[%{DATA:Thread}\] %{WORD:logLevel}%{SPACE}(?<message>.*)",
			"%{DATESTAMP_ANOTHER:logTime} \[%{DATA:Thread}\] %{WORD:logLevel}%{SPACE}%{DATA:className} (?<message>.*)"
		]
	}
	}
	date {
		match => [ "logTime", "dd MMM YYYY HH:mm:ss,SSS"]
	}
}
we stop seeing any logs in the Kibana UI. But if we give an incorrect format in the date filter, the logs start appearing in Kibana again, tagged with _dateparsefailure.

Not sure how to get around this issue.

Thank you.
Saurabh.

Could you post the rubydebug output and Logstash logs?
And – sorry, but I have to ask – are you sure that there are no documents in ES and that they are not just outside of your selected time range in Kibana?

(Please put only the code in the code block in your post, not the body text. It's easier to read the text if you don't have to scroll sideways on mobile.)

Hi Jenni,

I'm updating my original post here with the details you asked for. Please note that I've redacted some of the information.

Here is our current filter configuration:

filter {
  grok  {
    patterns_dir => ["/etc/logstash/patterns/"]
    match => {
      "message" => ["%{DATESTAMP:logTime}%{SPACE}%{SALLOGLEVEL:logLevel}%{SPACE}\[%{DATA:logger}\]%{SPACE}\(%{DATA:Thread}\)%{SPACE}%{DATA:msgCode} (?<logMessage>.*)",
                        "^%{TIME:logTime}%{SPACE}%{SALLOGLEVEL:logLevel}%{SPACE}\[%{DATA:logger}\]%{SPACE}\(%{DATA:Thread}\) (?<logMessage>.*)",
                        "%{TIMESTAMP_ISO8601:logTime}%{SPACE}%{SALLOGLEVEL:logLevel}%{SPACE}\[%{DATA:Thread}\] (?<logMessage>.*)",
                        "^%{TIME:logTime} \[%{DATA:Thread}\]%{SPACE}%{SALLOGLEVEL:logLevel}%{SPACE}(?<logMessage>.*)",
                        "%{DATA:logTime}%{SPACE}%{SALLOGLEVEL:logLevel}%{SPACE}\[%{DATA:logger}\]%{SPACE}\(%{DATA:Thread}\)%{SPACE}%{DATA:msgCode} (?<logMessage>.*)",
                        "%{DATA:logTime}%{SPACE}\[%{DATA:Thread}\]%{SPACE}%{SALLOGLEVEL:logLevel}%{SPACE}%{DATA:className} (?<logMessage>.*)"]
    }
  }
  date {
    match => [ "logTime",
               "dd MMM YYYY HH:mm:ss,SSS",
               "MMM dd YYYY HH:mm:ss",
               "MMM dd YYYY HH:mm:ss.SSS",
               "HH:mm:ss,SSS",
               "MMM dd, YYYY @ HH:mm:ss.SSS" ]
    target => "@timestamp"
  }
}

Here is the output configuration:

output {
  file  {
    path => "/tmp/logstash-output.log"
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
  }
}
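One thing worth keeping in mind with this output: the %{+YYYY.MM.dd} suffix in the index name is rendered from the event's @timestamp. So once the date filter successfully replaces @timestamp with logTime, events are indexed under the log's own date rather than the ingest date, which matters when checking time ranges in Kibana. Roughly, as a Ruby sketch (the log type string here is made up for illustration):

```ruby
require 'time'

# Mimics Logstash's sprintf-style "%{+YYYY.MM.dd}" index suffix,
# which is formatted from the event's @timestamp in UTC.
def index_name(log_type, timestamp)
  "#{log_type}-#{timestamp.utc.strftime('%Y.%m.%d')}"
end

# With the date filter active, a line logged on 30 Jul lands in the
# index for that day, even if it is ingested later.
puts index_name('app-operational-log', Time.utc(2020, 7, 30, 1, 35, 39))
```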

Here is a snippet of the rubydebug output:

{
           "ecs" => {
        "version" => "1.4.0"
    },
        "fields" => {
        "log_type" => "xxx-xxx-operational-log"
    },
           "log" => {
          "file" => {
            "path" => "/xxx/xxx/log/xxxoperational.log"
        },
        "offset" => 20568955
    },
        "Thread" => "Thread-93082 (ActiveMQ-client-factory-threads-910567953",
    "@timestamp" => 2020-07-30T22:42:04.932Z,
         "input" => {
        "type" => "log"
    },
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
          "host" => {
         "architecture" => "x86_64",
                 "name" => "xxx.xxx.com",
        "containerized" => false,
             "hostname" => "xxx.xxx.com",
                   "os" => {
                "name" => "Oracle Linux Server",
            "platform" => "ol",
              "family" => "",
              "kernel" => "4.14.35-1844.3.2.el7uek.x86_64",
             "version" => "7.6"
        },
                   "id" => "6de0694567f14eaa9c6d0fefd35c3de1"
    },
         "agent" => {
            "hostname" => "xxx.xxx.com",
                "type" => "filebeat",
                  "id" => "5abee680-bff0-41df-8702-12e87ed5c707",
        "ephemeral_id" => "444da032-ad60-469a-bb11-50b98a75dad3",
             "version" => "7.6.2"
    },
        "logger" => "OperationalLogger",
       "message" => "31 Jul 2020 04:12:04,932 WARN  [DebugLogger 245] (Thread-93082 (ActiveMQ-client-factory-threads-910567953)) O_DT-ET0003 TransportMDB.onMessage Exception: Component TroubleTicket rejected message. caused by Could not send trouble ticket caused by Trouble ticket service cannot be reached",
      "@version" => "1",
      "logLevel" => "WARN",
       "logTime" => "31 Jul 2020 04:12:04,932",
       "msgCode" => ")",
    "logMessage" => "O_DT-ET0003 TransportMDB.onMessage Exception: Component TroubleTicket rejected message. caused by Could not send trouble ticket caused by Trouble ticket service cannot be reached"
}
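A side observation on this output: Thread is captured without its closing parenthesis and msgCode comes out as ")". That happens because grok's DATA pattern compiles to the non-greedy `.*?`, so it stops at the first `)` it can, which here is the inner one of the nested parentheses. A small plain-Ruby illustration of the difference (a sketch only; the field names mirror the grok captures):

```ruby
# Sample fragment with nested parentheses, like the thread field above.
line = '(Thread-93082 (ActiveMQ-client-factory-threads-910567953)) O_DT-ET0003 rest'

# Non-greedy, like grok's %{DATA:Thread}: stops at the FIRST ')'.
lazy   = line.match(/\((?<thread>.*?)\)\s*(?<msg_code>\S+)/)
# Greedy: backtracks from the end, keeping the nested ')' inside the capture.
greedy = line.match(/\((?<thread>.*)\)\s*(?<msg_code>\S+)/)

puts lazy[:thread]    # inner closing parenthesis is lost
puts lazy[:msg_code]  # picks up the stray ')'
puts greedy[:thread]
puts greedy[:msg_code]
```

That is why msgCode ends up as ")" in the rubydebug output; it does not affect logTime, but it is worth fixing the Thread pattern separately.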

When we enable the date filter, the Kibana UI stops receiving any log entries. As soon as we remove the date filter or give it an incorrect pattern, the log entries immediately appear again, but tagged with _dateparsefailure.

Hi Jenni,

The issue has been resolved on our end.

  1. We were getting the _dateparsefailure because we had missed one time format in the date filter.
  2. The date range I was selecting in Kibana was indeed too small - only fifteen minutes. When I changed it to 30 days, recreated the indices, and restarted Filebeat and Logstash, I got all the data.
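For anyone finding this thread later: point 1 works because the date filter tries each format in its match list in order and uses the first one that parses, tagging the event with _dateparsefailure only when all of them fail. A rough Ruby sketch of that behaviour (illustrative only, not Logstash code; Logstash uses Joda patterns, and the format list here is a made-up subset):

```ruby
require 'date'

# Formats tried in order, analogous to the date filter's match list.
FORMATS = ['%d %b %Y %H:%M:%S,%L', '%b %d %Y %H:%M:%S'].freeze

def parse_log_time(value)
  FORMATS.each do |fmt|
    begin
      return DateTime.strptime(value, fmt)
    rescue ArgumentError
      next # this format missed; fall through to the next one
    end
  end
  nil # every format failed => event would be tagged _dateparsefailure
end

puts parse_log_time('30 Jul 2020 01:35:39,762')
puts parse_log_time('not a timestamp').inspect
```

So a single missing format only breaks the log lines that need it; adding it back to the match list clears the tag for those lines.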

We can close the discussion.

Thank you.

Saurabh.