Convert Timestamp to UTC+0 and keep microseconds

I am having an issue with my messages being out of order when I sort on @timestamp in Elasticsearch, because the @timestamp field does not store microseconds. What I want to do is take the timestamps I am getting from my syslog messages, convert them all to UTC+0, and store the result in a new field called Local_Timestamp. Right now I am parsing out a Local_Timestamp, which is the timestamp on my syslog messages, but they are not all in the same timezone.

Here is an example of what my timestamps look like:

"Local_Timestamp" : "2017-09-11T23:59:48.740416+00:00"
"Local_Timestamp" : "2017-09-11T20:34:21.310886+05:00"

My Logstash grok filter looks like this to pull them out:

match => { "message" => "%{TIMESTAMP_ISO8601:Local_Timestamp}" }

Any help would be appreciated!

I am starting to play around with the ruby filter to get what I want. I have not tested it yet, but would something like this work?

ruby {
  init => "require 'date'"
  code => "temp = DateTime.strptime(event['Local_Timestamp'], '%Y-%m-%dT%H:%M:%S.%L%z')
           event['Local_Timestamp'] = temp.new_offset(0)"
}

I want to take the Local_Timestamp field and parse it into a DateTime object, then adjust the offset to 0, which should change the timezone to UTC+0, and finally overwrite the Local_Timestamp field with the new value. My concern is that since the result was not parsed with the TIMESTAMP_ISO8601 grok pattern, Elasticsearch won't be able to sort it like a timestamp.

Could I just run a grok match again on that single field?
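
I am guessing something like this would do it, since grok can match against any field and not just message, though I have not tried it:

grok {
  match => { "Local_Timestamp" => "%{TIMESTAMP_ISO8601:Local_Timestamp}" }
  overwrite => ["Local_Timestamp"]
}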

The relevant part of my final config looks like this:

grok {
  overwrite => ["message"]
  match => { "message" => "%{TIMESTAMP_ISO8601:Local_Timestamp}" }
}
ruby {
  init => "require 'date'"
  code => "temp = DateTime.strptime(event.get('Local_Timestamp'), '%Y-%m-%dT%H:%M:%S.%N%:z')
           temp = temp.new_offset(0)
           event.set('Local_Timestamp') = temp.strftime('%FT%T.%6N%:z')"
}

From looking at the Logstash logs, it does not look like it throws any errors; it just keeps looping through startup. No messages are processed and inserted into Elasticsearch with this config. As soon as I take the ruby filter out, things go back to normal.

Here are some of the logs I am getting:

The same startup sequence repeats about a minute apart (16:56:11, 16:57:31, 16:58:39, and so on). One full cycle looks like this:

[2017-09-18T16:57:31,695][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@localhost:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&$
[2017-09-18T16:57:31,699][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2017-09-18T16:57:31,810][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x34f162c9 URL:http://logstash_system:xxxxxx@localhost:9200/_xpack/monitoring/?$
[2017-09-18T16:57:31,811][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::HTTP:0x67977d1a URL:http://localhost:9200>]}
[2017-09-18T16:57:31,812][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2}
[2017-09-18T16:57:31,812][INFO ][logstash.pipeline        ] Pipeline .monitoring-logstash started
[2017-09-18T16:57:31,832][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:1514"}
[2017-09-18T16:57:31,841][INFO ][logstash.inputs.udp      ] Starting UDP listener {:address=>"0.0.0.0:1514"}
[2017-09-18T16:57:31,850][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@192.168.1.233:9200/]}}
[2017-09-18T16:57:31,850][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@192.168.1.233:9200/, :path=>"/"}
[2017-09-18T16:57:31,859][INFO ][logstash.inputs.udp      ] UDP listener started {:address=>"0.0.0.0:1514", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2017-09-18T16:57:31,881][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0xbf4f38f URL:http://elastic:xxxxxx@192.168.1.233:9200/>}
[2017-09-18T16:57:31,882][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-09-18T16:57:31,924][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_a$
[2017-09-18T16:57:31,927][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x49bbcdf1 URL://192.168.1.233:9200>]}
[2017-09-18T16:57:31,995][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Then the exact same sequence starts over again at 16:58:39, and it just keeps cycling.

If I could get some help, that would be great!

I have hit a roadblock here with my ruby filter. Can anyone help me with this?
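
One thing I am starting to suspect: event.set takes the field name and the value as two separate arguments, so the assignment form event.set('Local_Timestamp') = ... in my config above is not valid Ruby and would crash the filter, which could be why the pipeline keeps restarting. If that is the problem, the corrected filter should look something like this (still untested on my end):

ruby {
  init => "require 'date'"
  code => "temp = DateTime.strptime(event.get('Local_Timestamp'), '%Y-%m-%dT%H:%M:%S.%N%:z')
           temp = temp.new_offset(0)
           event.set('Local_Timestamp', temp.strftime('%FT%T.%6N%:z'))"
}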

If you are OK with millisecond precision, then I think this works:

    date {
            match => ["Local_Timestamp", "YYYY-MM-dd'T'HH:mm:ss.SSSSSSZ"]
            timezone => "Etc/GMT"
    }
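
Note that by default the date filter writes the parsed time into @timestamp (in UTC), which is what you would sort on. If you would rather keep the converted value in Local_Timestamp itself, I believe the filter's target option can do that:

    date {
            match => ["Local_Timestamp", "YYYY-MM-dd'T'HH:mm:ss.SSSSSSZ"]
            timezone => "Etc/GMT"
            target => "Local_Timestamp"
    }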

I had that thought as well: use the date filter to convert Local_Timestamp to the correct timezone. But I still need the microseconds. I know it's pretty hacky, but do you know a way to just extract the microseconds from the timestamp? I could store them in a separate field, then sort on @timestamp first and microseconds second. Either way I need to keep the microseconds.

Do you have any other ideas?

If you cannot live without those last 3 digits of precision, then you can put them in another field using this:

    grok {
            match => { "Local_Timestamp" => "[0-9]{3}(?<microsecs>[0-9]{3})[+-]" }
    }
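
With the first example above, that pattern would match the 740416 just before the +, so microsecs would capture "416", the last three of the six fractional digits.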

@Badger, thanks for that regex. Maybe I am dreaming, but I think I have seen some of my timestamps get truncated, so on rare occasions they look like this:

"Local_Timestamp" : "2017-09-11T23:59:48.7434+00:00"
"Local_Timestamp" : "2017-09-11T20:34:21.310886+05:00"

The first timestamp does not have a full three digits for the microseconds. I think it might be because trailing zeros are being dropped, which is kind of annoying.

Got any suggestions for accounting for that? If I do get microsecs out, I would need to append trailing zeros for it to sort right. Maybe use the mutate filter? Or could the regex you provided be updated to grab any number of digits until it sees a + or -?
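
If I end up with a variable-length capture, I am thinking a small ruby filter could right-pad it with zeros. This assumes the field is named microsecs and holds the whole fraction:

ruby {
  code => "f = event.get('microsecs'); event.set('microsecs', f.to_s.ljust(6, '0')) if f"
}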

Okay, I must have been dreaming. I looked back through my logs, and even the timestamps with trailing zeros were fine. The only question I have left is whether the date filter truncates or rounds a timestamp with microseconds.

So if I had a timestamp like:

2017-09-11T20:34:21.318886+00:00

Would it round it to 2017-09-11T20:34:21.319Z, or would it just truncate it to 2017-09-11T20:34:21.318Z?
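
I suppose I could also just test that empirically: push one event through with a stdout output and compare what lands in @timestamp against the original string:

output {
  stdout { codec => rubydebug }
}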

In that case you could use this:

    grok { match => { "Local_Timestamp" => "(?<fractionalsec>\.[0-9]{1,6})[+-]" } }
    mutate { convert => { "fractionalsec" => "float" } }

Note that the milliseconds then occur both in your @timestamp and in fractionalsec, but if you are just sorting then you do not care about that.
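
With the example above you would end up with @timestamp holding the millisecond part and fractionalsec => 0.318886, and you can then sort on both fields, for example in a search request body:

    "sort": [ { "@timestamp": "asc" }, { "fractionalsec": "asc" } ]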


@Badger, you're amazing. Legit, I was getting so frustrated with the ELK stack, and I think this might save me. Let me give that a go and see if it works. I'll close the post if it does. Thank you so much!

@Badger, you managed to solve it! Sorting on @timestamp, which has been converted to Etc/UTC, and then on Fractional_Sec puts things in the order I want. I can't quite get the URL search to work, but using the Python package gives me the results I need.

The URL query I was using was this:

_search?q=*&pretty&sort=@timestamp:asc&sort=Fractional_Sec=asc&size=10000

Seems to only sort on one.
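
Looking at it again, I suspect the problem is that the second sort uses = instead of :, and as far as I know the URI search expects a single comma-separated sort parameter, so something like this should sort on both fields:

_search?q=*&pretty&sort=@timestamp:asc,Fractional_Sec:asc&size=10000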
