Dynamic Timezone in Logstash csv

Hi there, I'm completely new to pretty much everything here and was introduced to ELK at my workplace with the words: "We use ELK here, we have this and this problem, fix it."
So here I am, with little to no knowledge of the matter.


We get log files from all over the world, meaning that the timezone varies all over the place.
They'll get put into Logstash with the following config, which uses the csv filter.

    input {
      file {
        path => "/data/mauser/*.log"
        start_position => "beginning"
        sincedb_path => "/dev/null"
      }
    }

    filter {
      csv {
        separator => "|"
        columns => [ "timestamp", "version", "mainProcessID", "mainThreadID", "currentThreadID", "programmID", "severity", "module", "sessionID", "message" ]
      }
      date {
        match => ["timestamp", "yyyy-MM-dd'T'HH:mm:ss'.'SSS'+02'"]
      }
    }

    output {
      elasticsearch {
        hosts => [ "imagine IP here" ]
        index => "test_logs33"
      }
    }
Example log:

2020-09-02T16:40:20.681+02|1.0.0|00011392|00000001|00000001|p201820903srv|I|Log|b6f584e247f740ad93da672008d80c8c|"Logger started with loglevel Verbose"
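For reference, splitting the example line on the pipe separator yields exactly ten fields, matching the columns list in the csv filter above. A quick sanity check in plain Python (outside of Logstash):

```python
# Split the example log line the same way the csv filter does (separator "|").
line = '2020-09-02T16:40:20.681+02|1.0.0|00011392|00000001|00000001|p201820903srv|I|Log|b6f584e247f740ad93da672008d80c8c|"Logger started with loglevel Verbose"'
fields = line.split("|")

assert len(fields) == 10          # one field per entry in the columns list
print(fields[0])                  # the "timestamp" field: 2020-09-02T16:40:20.681+02
```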

The important part here is that the timezone is hardcoded in the filter, with the +02 quoted so it is exempted from parsing.
Is there a syntax for a placeholder character so I can always exempt the last three characters from parsing dynamically?

The +02 in the logs changes depending on the timezone they come from, so my current "solution" of hardcoding +02 only works for one static timezone.

If you have any other solution, I'm totally open to it.

    mutate { gsub => [ "timestamp", ".{3}$", "" ] }

will delete the last three characters of the [timestamp] field.
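To illustrate with the timestamp from the example line, here is the same regex applied in plain Python (just for demonstration; Logstash's gsub uses the same regex syntax):

```python
import re

# Strip the last three characters (the "+02" offset) from the timestamp,
# mirroring: mutate { gsub => [ "timestamp", ".{3}$", "" ] }
ts = "2020-09-02T16:40:20.681+02"
stripped = re.sub(r".{3}$", "", ts)
print(stripped)  # 2020-09-02T16:40:20.681
```

With the offset stripped, the date filter's match pattern would drop the `'+02'` literal (e.g. `yyyy-MM-dd'T'HH:mm:ss.SSS`). Note that the stripped timestamp no longer carries any zone information, so Logstash will interpret it in its default timezone unless you set the date filter's `timezone` option. If you want to keep the offset instead of discarding it, it may also be worth testing whether a plain `ISO8601` match pattern accepts the varying `+02`/`+05`/etc. suffixes directly.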


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.