Replace @timestamp with source log timestamp

Dear Community,

I'm quite new to ELK and also new here in the community. So first of all: hi everybody!
I've spent a couple of hours trying to solve this issue. A lot of other topics here in the community cover similar issues, but those didn't help me, so I opened a new topic in the hope that you can help me.

My goal is to get the timestamp from my source log files into Elasticsearch/Kibana as the main time/date field.
Currently, when I create a new index pattern in Kibana, I can only choose @timestamp, because it's the only field of type "date". So I tried to "override" the @timestamp field, without success.

Below are a source log example and my config:

Source Log Example:
2018-12-03 15:53:33 193.178.171.1 - HTTP 192.168.17.10 80 GET /ogd_routing/XML_TRIP_REQUEST2/XML_DM_REQUEST sessionID=0&locationServerActive=1&type_dm=any&name_dm=%2C+Rosenh%C3%BCgelstra%C3%9Fe 200 510 6052 0 HTTP/1.1 - - -

Logstash Pattern:
CITRIXNSVOR %{TIMESTAMP_ISO8601:event_timestamp} %{IPV4:s-ip} - %{WORD:type} %{IPV4:c-ip} %{NUMBER:s-port} %{WORD:method} %{URIPATH:path_request} %{DATA:cs-uri-query} %{NUMBER:response} %{NUMBER:cs-bytes} %{NUMBER:sc-bytes} %{NUMBER:time-taken} %{DATA:cs-version} %{GREEDYDATA:cs-header-user-agent} %{GREEDYDATA:cs-header-cookie} %{GREEDYDATA:cs-header-referer}

Logstash Config:
filter {
if "citrix-netscaler-vor" in [tags] {
grok {
patterns_dir => [ "/usr/share/logstash/patterns/" ]
match => [
"message", "%{CITRIXNSVOR}"
]
}
date {
match => [ "event_timestamp", "ISO8601" ]
target => "@timestamp"
}
}
}

The result is a _dateparsefailure. All fields were matched fine, but unfortunately no success with the timestamp.
@timestamp is still the import/index time, and event_timestamp is the timestamp from the log file.

I have excluded the config where I add the tag.

Maybe you are able to help me get this working.

Regards
Wilhelm

Happy new year....

Does nobody have an idea how to solve this?

Hi,

This happens because the date filter does not recognize yyyy-MM-dd HH:mm:ss as an ISO8601 timestamp.
Try it this way:

date {
  match => [ "event_timestamp", "yyyy-MM-dd HH:mm:ss" ]
  timezone => "Europe/Vienna" # This may be needed if the log source timestamp is local time.
}
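For reference, the difference is just the separator between the date and time parts - a strict ISO8601 timestamp carries a T there, which is why your log line fails the ISO8601 matcher:

# Accepted by the ISO8601 matcher:   2018-12-03T15:53:33
# Your log format (space-separated): 2018-12-03 15:53:33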

Hello,

Based on your response I have changed the following:

Logstash Pattern
CITRIXNSVOR (?<event_timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}) %{IPV4:s-ip} - %{WORD:type} %{IPV4:c-ip} %{NUMBER:s-port} %{WORD:method} %{URIPATH:path_request} %{DATA:cs-uri-query} %{NUMBER:response} %{NUMBER:cs-bytes} %{NUMBER:sc-bytes} %{NUMBER:time-taken} %{DATA:cs-version} %{GREEDYDATA:cs-header-user-agent} %{GREEDYDATA:cs-header-cookie} %{GREEDYDATA:cs-header-referer}

Filter
filter {
if "citrix-netscaler-vor" in [tags] {
grok {
patterns_dir => [ "/usr/share/logstash/patterns/" ]
match => [
"message", "%{CITRIXNSVOR}"
]
}
date {
# match => [ "event_timestamp", "ISO8601" ]
match => [ "event_timestamp", "yyyy-MM-dd HH:mm:ss" ]
timezone => "Europe/Vienna" # This may be needed if the log source timestamp is local time.
# target => "@timestamp"
}
}
}

The result is still that no new "timestamp" field is available when I try to create a new index pattern.

Also, I now get a _grokparsefailure in tags.

Regards

Why did you change your Grok pattern?
Leave it as it was; the grok filter has nothing to do with the date filter :slight_smile:

Also, remove the comment in the timezone line, as I am not sure if those are supported in Logstash - it was just FYI.

Because I thought it needed a better match. OK, I have reverted to the following:

Pattern
CITRIXNSVOR %{TIMESTAMP_ISO8601:event_timestamp} %{IPV4:s-ip} - %{WORD:type} %{IPV4:c-ip} %{NUMBER:s-port} %{WORD:method} %{URIPATH:path_request} %{DATA:cs-uri-query} %{NUMBER:response} %{NUMBER:cs-bytes} %{NUMBER:sc-bytes} %{NUMBER:time-taken} %{DATA:cs-version} %{GREEDYDATA:cs-header-user-agent} %{GREEDYDATA:cs-header-cookie} %{GREEDYDATA:cs-header-referer}

Filter
filter {
if "citrix-netscaler-vor" in [tags] {
grok {
patterns_dir => [ "/usr/share/logstash/patterns/" ]
match => [
"message", "%{CITRIXNSVOR}"
]
}
date {
match => [ "event_timestamp", "yyyy-MM-dd HH:mm:ss" ]
timezone => "Europe/Vienna" # This may be needed if the log source timestamp is local time.
}
}
}

Still no new date field available:

And also still a _grokparsefailure in tags.

Regards

The date filter doesn't contribute to a grok parse failure.

Check what you broke - or whether it was ever working properly.
Only after that does the date filter have a chance to work.

Does the event_timestamp get parsed correctly at the moment?

OK, thanks, that's important to know.

It has never worked properly.

Version 1: all fields were parsed correctly. An event_timestamp field also exists, but I had no chance to choose it as the main timestamp in the index pattern. And in tags I had a _dateparsefailure.

Version 2: only a few fields were parsed correctly; some were not even recognized. In tags --> _grokparsefailure.

Now: I have checked my pattern in the grok debugger, and it works fine.
The index pattern still offers only @timestamp as an available time field.

If I create an index pattern anyway, all fields are recognized:

But when I discover that index, nothing has been parsed correctly.

Regards

I think you have a major misunderstanding :slight_smile:

It is recommended that you use only one field for the timestamp, and that is the default @timestamp. You should not use anything else - in a normal case. @timestamp will be the datetime when the log line was created, and in Elasticsearch it is stored as an ISO8601-formatted timestamp in UTC.
Of course, you could have special cases where you would like to save it to another field in a different format and/or timezone, but this doesn't seem to be the case.

With the date filter, you can parse the original timestamp from your log line and store it in the @timestamp field. This is the default behavior. When you define the timezone attribute in the date filter, it considers the timestamp in the log line to be in that timezone, so it will convert it to UTC.
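As an illustration, a sketch with your sample values (Vienna is UTC+1 in December):

date {
  match    => [ "event_timestamp", "yyyy-MM-dd HH:mm:ss" ]
  timezone => "Europe/Vienna"
  # in:  event_timestamp = "2018-12-03 15:53:33"     (local Vienna time)
  # out: @timestamp      = 2018-12-03T14:53:33.000Z  (stored in UTC)
}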

So you should not expect a new field with the correct datetime, and there is no need to create a new index pattern every time. When the date filter starts to work, you will just see newly ingested logs with their correct timestamp in the @timestamp field. You can see the raw data when you open the log line in Kibana (your 1st screenshot) and select the "JSON" tab: it is an ISO8601-formatted timestamp in UTC. But you see it in your local time in Kibana, because Kibana converts it in your browser to your timezone and formats it as specified in Advanced Settings in Kibana.

Now back to your current issue.

Version 1: Seems to work pretty well in general, but the _dateparsefailure shows that something is failing. Not necessarily fatal, but you should fix it.

Version 2: Totally broke the grokking. Try to roll back to Version 1, both the filter config and the grok pattern.

Now: The index pattern shows all the fields because your Version 1 grok worked and those documents still exist in your indices. This is not a guarantee that your current Logstash filter config works at all, as you noticed in your last screenshot. Try to roll back to Version 1 and let's continue from there.

Thanks for the clarification.

I have deleted all Kibana index patterns and also all the specific indices, to start from zero.

What I would like to achieve:

  • Match my source log files with a grok filter and store the data in the defined fields.
  • The only timestamp I need is the timestamp in my source log files.
    I don't care about any other time value. For me it is useless to know when a file was indexed. I need to know when an event occurred, not when it was indexed.

I have rolled back to version 1. That's the config with which I opened this topic.

Logstash Pattern
CITRIXNSVOR %{TIMESTAMP_ISO8601:event_timestamp} %{IPV4:s-ip} - %{WORD:type} %{IPV4:c-ip} %{NUMBER:s-port} %{WORD:method} %{URIPATH:path_request} %{DATA:cs-uri-query} %{NUMBER:response} %{NUMBER:cs-bytes} %{NUMBER:sc-bytes} %{NUMBER:time-taken} %{DATA:cs-version} %{GREEDYDATA:cs-header-user-agent} %{GREEDYDATA:cs-header-cookie} %{GREEDYDATA:cs-header-referer}

Filter
filter {
if "citrix-netscaler-vor" in [tags] {
grok {
patterns_dir => [ "/usr/share/logstash/patterns/" ]
match => [
"message", "%{CITRIXNSVOR}"
]
}
date {
match => [ "event_timestamp", "ISO8601" ]
timezone => "Europe/Vienna" # This may be needed if the log source timestamp is local time.
target => "@timestamp"
}
}
}

With the config above, all fields are parsed correctly, but I get a _dateparsefailure in tags - which would be acceptable if my event_timestamp field were the leading one.

When I change the date section to your recommendations, the _dateparsefailure in tags is gone, but instead I lose all field matches: no custom fields are parsed any more and I get a _grokparsefailure in tags.

That means if the date is matched correctly, I lose all the other matches and no custom fields are parsed.

What I would like to achieve:

  • Match my source log files with a grok filter and store the data in the defined fields.
  • The only timestamp I need is the timestamp in my source log files.
    I don't care about any other time value. For me it is useless to know when a file was indexed. I need to
    know when an event occurred, not when it was indexed.

Sounds perfectly fine and this was what I also assumed.

filter {
  if "citrix-netscaler-vor" in [tags] {
    grok {
      patterns_dir => [ "/usr/share/logstash/patterns/" ]
      match => [
        "message", "%{CITRIXNSVOR}"
      ]
    }
    date {
      match => [ "event_timestamp", "ISO8601" ]
      timezone => "Europe/Vienna" # This may be needed if the log source timestamp is local time.
      target => "@timestamp"
    }
  }
}

Please format your configs; they are a bit hard to read with no indentation.

In the date filter, remove my comment after the timezone definition. Also, the target is not strictly needed; as per the documentation, @timestamp is the default target (but this doesn't make any difference).
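In other words, the date block can be trimmed down to:

date {
  match => [ "event_timestamp", "ISO8601" ]
  timezone => "Europe/Vienna"
}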

With the config above, all fields are parsed correctly, but I get a _dateparsefailure in tags - which would be acceptable if my event_timestamp field were the leading one.

A _dateparsefailure is never acceptable :wink:

When I change the date section to your recommendations, the _dateparsefailure in tags is gone, but instead I lose all field matches: no custom fields are parsed any more and I get a _grokparsefailure in tags.

I remember having this type of problem before. It has something to do with the order of filters - they should run sequentially, but sometimes I have experienced weird things. Try to separate the grok filter and the date filter into their own filter sections; the if [event_timestamp] guard also makes sure the date filter only runs when grok actually extracted a timestamp:

filter {
  if "citrix-netscaler-vor" in [tags] {
    grok {
      patterns_dir => [ "/usr/share/logstash/patterns/" ]
      match => [
        "message", "%{CITRIXNSVOR}"
      ]
    }
  }
}
filter {
  if [event_timestamp] {
    date {
      match => [ "event_timestamp", "ISO8601" ]
      timezone => "Europe/Vienna"
    }
  }
}

First of all... thanks for your time and help :slight_smile:

I have changed the grok filter based on your response.

filter {
  if "citrix-netscaler-vor" in [tags] {
    grok {
      patterns_dir => [ "/usr/share/logstash/patterns/" ]
      match => [
        "message", "%{CITRIXNSVOR}"
      ]
    }
  }
}
filter {
  if [event_timestamp] {
    date {
      match => [ "event_timestamp", "ISO8601" ]
      timezone => "Europe/Vienna"
    }
  }
}

Result: all fields were correctly matched and parsed | one index with all data inside | _dateparsefailure in tags

filter {
  if "citrix-netscaler-vor" in [tags] {
    grok {
      patterns_dir => [ "/usr/share/logstash/patterns/" ]
      match => [
        "message", "%{CITRIXNSVOR}"
      ]
    }
  }
}
filter {
  if [event_timestamp] {
    date {
      match => [ "event_timestamp", "yyyy-MM-dd HH:mm:ss" ]
      timezone => "Europe/Vienna"
    }
  }
}

Result: no custom fields were matched or parsed correctly | a lot of indices split by date | _grokparsefailure in tags

First of all... thanks for your time and help :slight_smile:

No problem, I'm having a bit of a boring day at the office...

Result: no custom fields were matched or parsed correctly | a lot of indices split by date | _grokparsefailure in tags

This does not make any sense to me. The filters have nothing to do with each other: if the date filter runs before grok for some reason, it should just be skipped, as event_timestamp wouldn't exist yet. If the grok filter runs before date, it is not affected by the date filter, and you should either see the result from the first run (_dateparsefailure) or the date filter succeeds and you see the correct date.

Do you only have one config file? Logstash concatenates all config files into a single pipeline, so a filter in another file could interfere.

Well, at least I can reproduce this...

I have 4 config files and one custom pattern:

Logstash Pattern

 CITRIXNSVOR %{TIMESTAMP_ISO8601:event_timestamp} %{IPV4:s-ip} - %{WORD:type} %{IPV4:c-ip} %{NUMBER:s-port} %{WORD:method} %{URIPATH:path_request} %{DATA:cs-uri-query} %{NUMBER:response} %{NUMBER:cs-bytes} %{NUMBER:sc-bytes} %{NUMBER:time-taken} %{DATA:cs-version} %{GREEDYDATA:cs-header-user-agent} %{GREEDYDATA:cs-header-cookie} %{GREEDYDATA:cs-header-referer}
#CITRIXNSVOR (?<event_timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}) %{IPV4:s-ip} - %{WORD:type} %{IPV4:c-ip} %{NUMBER:s-port} %{WORD:method} %{URIPATH:path_request} %{DATA:cs-uri-query} %{NUMBER:response} %{NUMBER:cs-bytes} %{NUMBER:sc-bytes} %{NUMBER:time-taken} %{DATA:cs-version} %{GREEDYDATA:cs-header-user-agent} %{GREEDYDATA:cs-header-cookie} %{GREEDYDATA:cs-header-referer}

01-inputs

input {
  file {
    type => "citrix-netscaler-vor"
    path => [ "/NSWLLOG/*.log" ]
    #start_position => "beginning"
    #sincedb_path => "/dev/null"
  }
  file {
    type => "citrix-netscaler-vor"
    path => [ "/NSWLLOG/*.log*" ]
    #start_position => "beginning"
    #sincedb_path => "/dev/null"
  }
  file {
    type => "citrix-netscaler-vor"
    path => [ "/NSWLLOG/IMPORT/*.log" ]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
  file {
    type => "citrix-netscaler-vor"
    path => [ "/NSWLLOG/IMPORT/*.log*" ]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

10-filter.conf

filter {
  if [type] == "citrix-netscaler-vor" {
    mutate {
      add_tag => "citrix-netscaler-vor"
      add_field => {
        "syslog_server_name" => "VOR-LOG11.vor.int"
        "syslog_server_type" => "CentOS 7"
        "syslog_server_domain" => "vor.at"
      }
    }
  }
}

25-netscalervor.conf

filter {
  if "citrix-netscaler-vor" in [tags] {
    grok {
      patterns_dir => [ "/usr/share/logstash/patterns/" ]
      match => [
        "message", "%{CITRIXNSVOR}"
      ]
    }
  }
}
filter {
  if [event_timestamp] {
    date {
      match => [ "event_timestamp", "ISO8601" ]
      timezone => "Europe/Vienna"
    }
  }
}
filter {
  if [s-ip] {
    geoip {
      source => "s-ip"
      target => "geoip"
      database => "/usr/share/logstash/vendor/geoip/GeoLite2-City.mmdb"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

99-outputs.conf

output {
  #Enable stdout
  #Only selected logs are routed to stdout
  #Replace "DEBUGTAG" with the tag of whatever filter you're debugging
  #if "DEBUGTAG" in [tags] {
  #  stdout {
  #    codec => rubydebug
  #  }
  #}
  if "citrix-netscaler-vor" in [tags] {
    elasticsearch {
      hosts => ["192.168.100.162:9200"]
      index => "logstash-citrix-netscaler-vor-%{+YYYY.MM.dd}
    }
  }
  else {
    elasticsearch {
      hosts => ["192.168.100.162:9200"]
      index => "logstash-citrix-netscaler-vor-unknown-%{+YYYY.MM.dd}"
    }
  }
}

Those are all my config files.

It would be interesting to see the difference with stdout rubydebug when:
1.

filter {
  if [event_timestamp] {
    date {
      match => [ "event_timestamp", "ISO8601" ]
      timezone => "Europe/Vienna"
    }
  }
}

is removed completely from the config.
2.

filter {
  if [event_timestamp] {
    date {
      match => [ "event_timestamp", "yyyy-MM-dd HH:mm:ss" ]
      timezone => "Europe/Vienna"
    }
  }
}

is applied.

Just shooting in the dark :slight_smile:

If I knew how to do this :slight_smile: I could provide an output.

It would be interesting to see the difference with stdout rubydebug when:

Regards

Have a look at this blog post for a guide on how to work with Logstash. It includes examples of how to send data to stdout during development.

You already have it there:

output {
 #Enable stdout
 #Only selected logs are routed to stdout
 #Replace "DEBUGTAG" with the tag of whatever filter you're debugging

stdout {
  codec => rubydebug
}

Check Christian's link for further info.
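Put together, a minimal debug output section could look like this (reusing your existing tag; remove it again once you are done debugging):

output {
  if "citrix-netscaler-vor" in [tags] {
    stdout {
      codec => rubydebug
    }
  }
}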

I hope I have done this right....

ISO8601 in date filter

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -r -f "/etc/logstash/conf.d/*.conf"

reduced output because of a lot of debug messages:

[2019-01-02T15:25:00,864][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
2019-01-02 15:25:00,869 LogStash::Runner DEBUG AsyncLogger.ThreadNameStrategy=CACHED
[2019-01-02T15:25:00,888][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2019-01-02T15:25:03,729][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, else, if, \", ', } at line 80, column 4 (byte 1783) after output {\n  if \"citrix-netscaler-vor\" in [tags]{\n    stdout {\n      codec => rubydebug\n    }\n  }", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:149:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:22:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:42:in `block in execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:92:in `block in exclusive'", "org/jruby/ext/thread/Mutex.java:148:in `synchronize'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:92:in `exclusive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:317:in `block in converge_state'"]}
[2019-01-02T15:25:04,126][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
^C [2019-01-02T15:25:08,205][WARN ][logstash.runner          ] SIGINT received. Shutting down.

yyyy-MM-dd HH:mm:ss in date filter

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -r -f "/etc/logstash/conf.d/*.conf"

reduced output because of a lot of debug messages:

[2019-01-02T15:30:16,703][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
2019-01-02 15:30:16,714 LogStash::Runner DEBUG AsyncLogger.ThreadNameStrategy=CACHED
[2019-01-02T15:30:16,747][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2019-01-02T15:30:19,582][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, else, if, \", ', } at line 80, column 4 (byte 1795) after output {\n  if \"citrix-netscaler-vor\" in [tags]{\n    stdout {\n      codec => rubydebug\n    }\n  }", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:149:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:22:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:42:in `block in execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:92:in `block in exclusive'", "org/jruby/ext/thread/Mutex.java:148:in `synchronize'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:92:in `exclusive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:317:in `block in converge_state'"]}
[2019-01-02T15:30:20,002][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
^C[2019-01-02T15:30:27,086][WARN ][logstash.runner          ] SIGINT received. Shutting down.

Expected one of #, else, if, ", ', } at line 80, column 4 (byte 1795) after output {\n if "citrix-netscaler-vor" in [tags]{\n stdout {\n codec => rubydebug\n }\n }"

As you can see, there is a syntax error in your config.
Fix it and try again.
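For what it's worth, one candidate is the first elasticsearch block in your 99-outputs.conf: the index line is missing its closing quote. A corrected sketch of that block:

if "citrix-netscaler-vor" in [tags] {
  elasticsearch {
    hosts => ["192.168.100.162:9200"]
    index => "logstash-citrix-netscaler-vor-%{+YYYY.MM.dd}"  # closing quote added
  }
}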