[ELK] Logstash default timezone causes index-splitting problem across timezones

I'm a Chinese developer; our timezone is +08:00. The problem when using Logstash is that @timestamp is always formatted in UTC, e.g. "@timestamp" => "2015-07-25T16:00:30.000Z" when the input time is 2015-07-26 00:00:30. This causes one day of logs to be split across two indexes: logstash-2015.07.25 and logstash-2015.07.26.

I tried to fix it by adding a logged_date field to represent 2015-07-26 in the +08:00 timezone; however, Kibana adds 08:00 hours to all date fields, which makes logged_date incorrect in the Chinese timezone.

Could anyone suggest a solution to this problem? I have googled around and found no proper solution.

I've read the user guide for the Logstash date filter. Its timezone parameter is used to parse the input log time, not to change the output, so it cannot be used to change the timezone of the output @timestamp.
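For reference, this is what that timezone option does on the input side (a minimal sketch; the logtime field name and pattern are just examples from my setup, not part of anyone else's config):

```
filter {
  date {
    # Parse the local (+08:00) time string found in the log event...
    match    => ["logtime", "yyyy-MM-dd HH:mm:ss"]
    timezone => "Asia/Shanghai"
    # ...and store it, converted to UTC, in @timestamp.
    # There is no equivalent option for the output side.
  }
}
```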

ES and LS use UTC as much as possible to make it standardised.

If you are using KB then it shouldn't be a problem, as it will change the times to the TZ of the browser.

Yes, ES and Logstash using UTC is fine for Kibana, but the Logstash elasticsearch output plugin uses @timestamp to format the index name. As a result, one day of logs in the +08:00 timezone is indexed into two different indexes, which is confusing for us in the +08:00 timezone, although Kibana corrects for this when displaying query results.
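One workaround I'm considering (an untested sketch; the index_day field name is made up, and this event['...'] syntax is for Logstash 2.x — newer versions use event.get/event.set) is to compute the local date myself in a ruby filter and use that field in the index name:

```
filter {
  ruby {
    # Shift @timestamp by +08:00 and keep only the local date
    code => "event['index_day'] = (event['@timestamp'].time + 8*3600).strftime('%Y.%m.%d')"
  }
}
output {
  elasticsearch {
    # Index name follows the local day rather than the UTC day
    index => "logstash-%{index_day}"
  }
}
```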

Again, this is by design and nothing you should try to "fix".

Hi, I want to know your solution for this. We have the same problem. As we all know, Logstash uses UTC and Kibana automatically converts timestamps to the user's browser-local time, which is fine.

But we also want to use Filebeat and Logstash differently: instead of parsing and sending data to Elasticsearch, we just want to keep the log files, split by the server's local time, and use our own scripts to parse them.

So, is there any way to configure Logstash to generate log files in server time instead of UTC?

Nope.

OK, but personally speaking, I think having Logstash and Filebeat keep the logs split by local time is a necessity.
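Something like this is what I mean (an untested sketch; local_day is a made-up field, the +08:00 offset is our server's zone, and the event['...'] syntax is for Logstash 2.x). The file output's path otherwise interpolates dates from the UTC @timestamp:

```
filter {
  ruby {
    # Event date shifted to the server's local time (+08:00 assumed here)
    code => "event['local_day'] = (event['@timestamp'].time + 8*3600).strftime('%Y-%m-%d')"
  }
}
output {
  file {
    # Files roll over at local midnight instead of UTC midnight
    path => "/var/log/archive/app-%{local_day}.log"
  }
}
```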

From my experience: I have several jdbc inputs in my Logstash conf.d file, and I use a config like the one below for each stanza:

jdbc { #4
  jdbc_driver_library => "/opt/elasticsearch-jdbc-2.3.4.0/lib/mysql-connector-java-5.1.38.jar"
  jdbc_driver_class => "com.mysql.jdbc.Driver"
  jdbc_connection_string => "jdbc:mysql://192.168.xxx.xxx:3306/myschema"
  jdbc_user => "xxxxxx"
  jdbc_password => "xxxxxxx"
  jdbc_paging_enabled => "true"
  jdbc_page_size => 50000
  schedule => "*/15 * * * *"
  last_run_metadata_path => "/data/metadata/myindex_last_run.txt"
  jdbc_default_timezone => "UTC"
  statement => "select * from sales where sales_date > :sql_last_value"
  type => "my_type"
}

The myindex_last_run.txt contains:
--- 2017-01-23 18:30:00.297000000 Z

Before I used jdbc_default_timezone => "UTC", my query followed myindex_last_run.txt, which is stored in UTC, so the log showed:
select * from sales where sales_date > '2017-01-23 18:30:00' <== wrong

After I set jdbc_default_timezone => "UTC", my query changed to:
select * from sales where sales_date > '2017-01-24 01:30:00' <== right value
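If it helps to see the arithmetic: the stored sql_last_value is UTC, and with jdbc_default_timezone set, Logstash renders it in the server's zone before substituting it into the statement. A quick illustration in Python (the +07:00 server offset is my assumption, inferred from the two timestamps above):

```python
from datetime import datetime, timezone, timedelta

# sql_last_value as stored by Logstash in myindex_last_run.txt (UTC)
last_run_utc = datetime(2017, 1, 23, 18, 30, 0, tzinfo=timezone.utc)

# With jdbc_default_timezone => "UTC", Logstash knows the stored value is UTC
# and converts it to the server's zone before substitution.
# Assumption: the server clock runs at UTC+07:00.
server_zone = timezone(timedelta(hours=7))
rendered = last_run_utc.astimezone(server_zone).strftime("%Y-%m-%d %H:%M:%S")
print(rendered)  # 2017-01-24 01:30:00
```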

I was frustrated for several weeks, and just removed the documents duplicated by the query whenever I restarted Logstash.
Now I can sleep well with this configuration... :slight_smile:

That's my two cents...

Thanks...

Fadjar Tandabawana

Have you solved your problem?

No, it's not solved.

Kibana's default timezone is UTC, so on the Kibana page it looks as if Logstash split one day's data into two indexes when writing to ES, but the statistics are not actually affected. You just need to set the browser timezone in Kibana's settings and it will display correctly.

For example, for data from the 21st, in Kibana you will indeed see the first 8 hours in the index for the 20th, but the statistics aren't actually like that. You just need to define, in the Logstash filter, a timestamp consistent with the log time.

Folks, please use English here. There's a Chinese group available if you want to post in Chinese.

Ok, sorry for any inconvenience caused.

Not really.

Kibana converts the UTC based data to client browser's timezone, which is pretty fine when using the ES + Kibana + Logstash stack.

What I am asking is how to change Logstash's default timezone to Beijing time, then use the file plugin to write data to disk for storage only, instead of into ES.

oh sorry