Need help tracking down a time adjustment issue


#1

Hey, guys. Thanks for reading. I have logs coming from the Windows event log, pulled in by logstash-forwarder and sent to Logstash, where they are then sent to Elasticsearch 2.0.

I have an input as follows:

tcp {
    codec => json_lines
    port => 5100
    type => "eventlog"
}

And a filter:

grok {
    match => [ "Message", "TransactionID=%{UUID:TransactionID},TransactionRole=%{WORD:TransactionRole},TransactionType=%{WORD:TransactionType},ModalityName=%{WORD:ModalityName},SystemName=%{WORD:SystemName},RequestTimestamp=%{TIMESTAMP_ISO8601:RequestTimestamp},Duration=%{BASE10NUM:Duration}" ]
}

And an output:

elasticsearch {
    hosts => "10.xxx.xxx.xxx"
    index => "logstash-myindex-%{+YYYY.MM.dd}"
}

An example message looks like:

TransactionID=placeholder,TransactionRole=Server,TransactionType=ITI_8,ModalityName=placeholder,SystemName=placeholder,RequestTimestamp=2016-01-19T09:08:28,Duration=171.6033
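
The grok filter above splits that into separate fields, roughly like this (values taken from the sample; as far as Logstash is concerned they are all plain strings):

    TransactionID    => "placeholder"
    TransactionRole  => "Server"
    TransactionType  => "ITI_8"
    ModalityName     => "placeholder"
    SystemName       => "placeholder"
    RequestTimestamp => "2016-01-19T09:08:28"
    Duration         => "171.6033"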

This creates a separate date field (besides @timestamp) named RequestTimestamp with the value "2016-01-19T09:08:28". However, in Kibana 4, RequestTimestamp shows up as "January 19th 2016, 03:08:28.000", even with the timezone setting on "Browser". Also in Kibana, the original time in the message still shows "2016-01-19T09:08:28". So my question is: where is the offset getting applied, and how do I compensate? It looks like it's being converted to UTC twice...


(Mark Walkom) #2

LS assumes UTC by default, as does ES.
KB reads that (so assumes UTC) but converts to local to display it.

Are your logs coming in as UTC?
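
If they're not, a date filter that names the source timezone lets LS convert them correctly before they reach ES. Something like this (just a sketch; America/Chicago is an assumed zone, swap in whatever your servers actually use):

    date {
        # RequestTimestamp carries no zone info, so tell LS which zone it was recorded in
        match    => [ "RequestTimestamp", "ISO8601" ]
        timezone => "America/Chicago"
        target   => "RequestTimestamp"
    }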

