Full disclosure, just getting started with my first ELK stack setup...
When I use Filebeat to forward syslog, some of my hosts are in UTC and some are in EST. Once the logs go into Elasticsearch, everything is assumed to be UTC, which puts timestamp searches out by a few hours (viewed through Kibana).
Is there a way to configure the Filebeat agent to convert the dates to UTC before submitting to Logstash/Elasticsearch? Or is there a preferred way to address this?
Do you index directly into Elasticsearch, or via Logstash?
Filebeat just collects log lines; the read time it reports is already in UTC, but the content of the line itself is not adjusted. If you're using a grok filter in Logstash to parse the line and its timestamp, the timezone adjustment has to be made in Logstash.
Maybe you can use beat.hostname, beat.name, type, or some other field to identify the source.
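For example, a Logstash conditional keyed on beat.hostname could tag events from the EST hosts so a later filter can treat them differently. A sketch only; the host names are made up:

```
filter {
  # hypothetical host names -- replace with your EST machines
  if [beat][hostname] in ["web-est-01", "db-est-01"] {
    mutate { add_tag => ["est"] }
  }
}
```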
I am loading the data via Logstash; I'm using the Docker appliance sebp/elk. Below is a copy of the syslog.conf from the Logstash server.
So I'm gathering that I need to define a new filter type on the Logstash server, e.g. "syslog_est", and set the timestamp as EST in that filter? If so, what is the proper syntax for defining the timezone?
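The usual way is the Logstash date filter's timezone option, which takes a TZ database name; it tells Logstash how to interpret the parsed time, and @timestamp is then stored in UTC. A minimal sketch, assuming a syslog-style timestamp field named syslog_timestamp produced by an earlier grok filter and a hypothetical host name:

```
filter {
  if [beat][hostname] == "est-host-01" {   # hypothetical EST host
    date {
      match    => ["syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss"]
      timezone => "America/New_York"   # interpret parsed time as Eastern; @timestamp ends up UTC
    }
  }
}
```

Note that "America/New_York" is preferable to a fixed "EST" offset because it handles daylight saving transitions automatically.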