Filebeat failing to insert many logs

Hi, it looks like my Filebeat agents are failing to insert some logs into Elasticsearch.
I have multiple Filebeat instances, 3 Logstash nodes, and a 3-node Elasticsearch cluster (1 shard, 2 replicas).
I can see data in the nginx logs that is missing in Kibana, and when I restart my Filebeat and Logstash instances, this happens:

They start to insert the logs that were missed; based on the timestamps, they are inserting logs from some hours ago.
Here is an example: the number of nginx log lines for yesterday is 1,340,918, but the number of docs in Elasticsearch for that agent is 214,000. It looks like I'm losing some data every day.
I didn't configure any queue or anything else, just the log path and the Logstash endpoint in the Filebeat config file.
I'm confused. Can someone help me with this? Thanks a lot.
(3 Elasticsearch nodes: 8 GB JVM heap, 16 GB memory per VM)
(3 Logstash nodes with a 1500 MB JVM heap)
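For context, the Filebeat config is essentially just something like this (the paths and hostnames below are placeholders, not my real values):

```yaml
# Minimal sketch of the Filebeat setup described above:
# only a log path and the Logstash output are configured,
# so Filebeat falls back to its default in-memory queue.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/access.log

output.logstash:
  # Placeholder hostnames for the three Logstash nodes.
  hosts: ["logstash1:5044", "logstash2:5044", "logstash3:5044"]
  # Spread events across all listed hosts instead of picking one.
  loadbalance: true
```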

Did you check your Filebeat instances for error logs?
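For example, something along these lines will surface connection errors and backoff messages. The sample log lines below are made up for illustration; against a real install you would point the same grep at Filebeat's own log (e.g. `/var/log/filebeat/filebeat` on a default deb/rpm install):

```shell
# Write a couple of hypothetical Filebeat log lines to a sample file.
cat > /tmp/filebeat.sample.log <<'EOF'
2023-01-01T00:00:00Z INFO  Harvester started for file: /var/log/nginx/access.log
2023-01-01T00:00:05Z ERROR Failed to connect to backoff(async(tcp://logstash1:5044)): dial tcp: i/o timeout
EOF

# Count lines hinting at delivery problems (errors, backoff, retries).
# Run the same pattern against the real Filebeat log on your hosts.
grep -icE "error|backoff|retry" /tmp/filebeat.sample.log
```

If that count keeps growing, Filebeat is hitting backpressure from Logstash or Elasticsearch and the error text usually says which.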
