Folks,
We have one master node and two data nodes, with data paths configured on AWS EFS volumes (data1, data2, and data3). The current ELK version is 6.8.
We have configured Filebeat as below to feed the IIS server logs:
filebeat.prospectors:
- type: log
  paths:
    - /cust/local/ELK_Input_Logs/*.log

output.logstash:
  hosts: ["localhost:5044"]

setup.kibana:
  host: "http://x.y.z:5602"

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
We have also configured the following Logstash output under /etc/logstash/conf.d:
output {
  elasticsearch {
    hosts => ["x.y.z:9200", "a.b.c:9200", "d.e.f:9200"]
    index => "logstash-prod-iis-%{+yyyy-MM-dd}"
  }
}
Every day, the day's live log files are placed in the Filebeat path above, and Logstash processes them correctly, creating a new index for the day. We also delete the previous day's files from the Filebeat log folder before placing the new day's logs, so the folder contains only the fresh live log files.
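For clarity, our daily rotation step is roughly the following sketch (the directory defaults to a scratch folder here for illustration; on the server it is /cust/local/ELK_Input_Logs, and the file names are hypothetical examples):

```shell
#!/bin/sh
# Sketch of the daily rotation step. LOG_DIR defaults to a temporary
# directory for illustration; on the server it is /cust/local/ELK_Input_Logs.
LOG_DIR="${LOG_DIR:-$(mktemp -d)}"

# Simulate yesterday's leftover file (hypothetical IIS log name).
touch "$LOG_DIR/u_ex190707.log"

# Remove the previous day's files so only today's live logs remain.
rm -f "$LOG_DIR"/*.log

# Place the new day's IIS log (hypothetical name; in production this
# is a copy from the IIS server).
touch "$LOG_DIR/u_ex190708.log"

# Show what Filebeat would now see in the folder.
ls "$LOG_DIR"
```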
Problem: When we check the day's index (logstash-prod-iis-2019-07-08) with a curl query, the records contain old data rather than the currently placed log files. That is, the index holds records from the 1st Jul through 8th Jul log files, even though we verified that the Filebeat log folder contains only the active live log for the day (8th Jul .log).
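For reference, the check we run looks roughly like this (host and index name are from this post; sorting on @timestamp is an assumption about our mapping):

```shell
# Fetch the oldest document in the day's index, sorted by @timestamp,
# to see how far back the ingested records go.
curl -s 'http://x.y.z:9200/logstash-prod-iis-2019-07-08/_search?size=1&sort=@timestamp:asc&pretty'
```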
Does this mean the ELK stack keeps the old log files in a queue and parses/feeds them to Logstash for the day's index?
If so, how can we avoid the past data and ingest only the current live log files placed in the Filebeat folder?
Please note that a few months back everything was working fine on ELK version 6.6. The issue was observed recently after migrating to ELK 6.8, but we are not sure whether the migration to the higher version is the cause. Please help us avoid this incorrect log feed.
Let me know if any part of the problem description is unclear.