I have a Filebeat process with 4 prospectors. A scheduled task updates the monitored file for each prospector with the current time; the task runs every 2 minutes.
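To clarify what the task does: it appends a timestamp line to each monitored file (it does not truncate or recreate the files). Shown here as an illustrative Unix-shell equivalent, since the real task is a Windows scheduled task; the file paths are placeholders, not my actual paths.

```shell
# Append the current time to each prospector's monitored file.
# Paths are placeholders for illustration only.
for f in /tmp/prospector1.log /tmp/prospector2.log /tmp/prospector3.log /tmp/prospector4.log; do
  date '+%Y-%m-%d %H:%M:%S' >> "$f"
done
```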
The data is shipped to Elasticsearch, but occasionally a large gap appears in the indexed data, on the order of 1 to 1.5 hours. Whatever was written between the start and the end of the gap is lost.
Filebeat is running as a process on a Windows machine.
Sometimes a gap spans all 4 prospectors, and sometimes only 1 of the 4, while the other 3 keep sending data to the same Elasticsearch machines through the same output.
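For reference, my prospector configuration looks roughly like the sketch below (Filebeat 5.x `filebeat.prospectors` syntax; the paths, host, and the `scan_frequency`/`close_inactive` values shown are placeholders, not my exact settings):

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - 'C:\logs\prospector1.log'   # one such prospector per monitored file
    scan_frequency: 10s             # how often Filebeat checks the file for new lines
    close_inactive: 5m              # close the file handle after this much inactivity

output.elasticsearch:
  hosts: ["es-host:9200"]           # placeholder host
```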
Any help figuring out what causes these gaps would be appreciated.