Incorrect feed in current Logstash index

Folks,

We have one master node and two other cluster nodes, with the data directories (data1, data2 and data3) configured on an AWS EFS volume. We are currently running ELK 6.8.
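
For context, the data-on-EFS part of that setup presumably looks something like the following in each node's elasticsearch.yml. The mount point, cluster name and node name here are illustrative assumptions; only the per-node directory names (data1, data2, data3) come from the description above.

# elasticsearch.yml on the first node; the other nodes would use data2 and data3
cluster.name: prod-elk
node.name: node-1
path.data: /mnt/efs/data1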

We have configured Filebeat as below to feed the IIS server logs:

filebeat.prospectors:
- type: log
  paths:
    - /cust/local/ELK_Input_Logs/*.log

output.logstash:
  hosts: ["localhost:5044"]

setup.kibana:
  host: "http://x.y.z:5602"

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
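
Something like the following can be used to sanity-check this file and the connection to Logstash from the Filebeat host (the config path assumes a default package install):

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml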

We have also configured the following Logstash output under /etc/logstash/conf.d:

output {
  elasticsearch {
    hosts => ["x.y.z:9200", "a.b.c:9200", "d.e.f:9200"]
    index => "logstash-prod-iis-%{+yyyy-MM-dd}"
  }
}
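
The input side of the pipeline is not shown in the post; a minimal sketch of what it presumably looks like, given the Filebeat output configuration above, is:

input {
  beats {
    # must match the port in Filebeat's output.logstash hosts setting
    port => 5044
  }
}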

Every day the live log files are placed in the Filebeat path mentioned above, and Logstash processes them correctly, creating a new index for the day. We also delete the old files from the Filebeat log folder every day before placing the new day's logs, so the folder only ever contains the fresh live log files.
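
Roughly, the daily file handling amounts to something like this (the destination path is the one mentioned above; the source path and exact script are only illustrative):

# clear yesterday's files, then place today's live IIS logs
rm -f /cust/local/ELK_Input_Logs/*.log
cp /data/iis/today/*.log /cust/local/ELK_Input_Logs/   # source path is illustrative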

Problem: When we check the day's Logstash index with a curl query (logstash-prod-iis-2019-07-08), the records all look fine but contain old data rather than only the currently placed log files. That is, the index holds records from the log files of 1st July through 8th July, even though we checked the Filebeat log folder and it contains only the live .log file for the day (8th July).
Does this mean the ELK server keeps old log files in a queue and parses/feeds them into the day's index?
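
A curl query along these lines shows the oldest and newest events that actually landed in the day's index (host and index name as above; this assumes the event time is mapped into @timestamp):

curl -s "http://x.y.z:9200/logstash-prod-iis-2019-07-08/_search?size=0" \
  -H 'Content-Type: application/json' -d '
{
  "aggs": {
    "oldest_event": { "min": { "field": "@timestamp" } },
    "newest_event": { "max": { "field": "@timestamp" } }
  }
}'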

If so, how can we avoid the past data and keep only the currently placed live Filebeat log files?
Please note that everything was working fine a few months back on ELK version 6.6. The issue was observed recently, after migrating to ELK 6.8, but we are not sure whether the upgrade is the cause. Please help us avoid this kind of incorrect log feed.

Let me know if anything in the problem description is unclear.

You should not run Elasticsearch on networked storage like EFS as it can be slow and cause stability problems. Have you verified that Elasticsearch is able to keep up with the data coming in with this configuration?

Thanks for your quick response. All the ELK services run on each server's own local storage, but the cluster data is stored on EFS. Does this also have an impact? Yes, we can see slow performance for uploading/fetching data now.

Any clue how to verify that Elasticsearch is keeping up with the incoming data, e.g. with some query syntax? Based on our observations all the data is coming in, since the document counts and logs are captured every day. The problem is that we are not sure how the old data is also being processed.

Elasticsearch should not store any data on EFS - use EBS instead.
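
On the earlier question of how to verify that Elasticsearch is keeping up: one rough check is whether the write thread pool is queuing or rejecting bulk requests, for example (using one of the Elasticsearch hosts from your output config):

curl -s "http://x.y.z:9200/_cat/thread_pool/write?v&h=node_name,name,active,queue,rejected"

A growing queue or a non-zero rejected count is a sign that indexing cannot keep up; comparing docs.count for the daily indices via _cat/indices over the day gives a similar picture.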

Was that the reason the old server log files were fed into the day's Logstash index? For the past 3 months we did not face this kind of issue, so we are not sure why the wrong/old data suddenly started being fed.

It would be helpful if you could provide a suggestion.

Just to add some additional details: we have 16 GB RAM, of which 8 GB is allocated to Elasticsearch and 2 GB to Logstash. Hopefully the system will take about 4 GB for its own processes and the remaining 2 GB will stay idle, which should be enough. Please suggest whether we should alter/increase the RAM further, since we are uploading around 7 to 10 GB of logs per day.
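
For reference, those allocations correspond roughly to the following JVM heap settings (in /etc/elasticsearch/jvm.options and /etc/logstash/jvm.options for a package install; exact file locations may differ):

# /etc/elasticsearch/jvm.options
-Xms8g
-Xmx8g

# /etc/logstash/jvm.options
-Xms2g
-Xmx2g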
