Filebeat not rescanning if Elasticsearch index is deleted

Hi People,

Apologies if this has been asked, I did search but couldn't find this issue.

I am currently testing a build of the ELK stack to monitor some of our servers. I have decided to use Beats for the log shipping due to the awkward nature of the multiple logs I need to ship.

I configured Logstash and Elasticsearch with Kibana, and all are up and running.

I have installed Beats on one of the servers and when first set up it worked fine and scanned the files, these were then sent to Logstash and onto Elasticsearch.

Due to the testing process I am trying to sort out the timestamp and therefore deleted the indices from Elasticsearch with the idea to rescan.

I need to recreate the index on Elasticsearch, but Beats is now refusing to send any data to Logstash. Even if I copy new, unscanned log files into the prospector directory, nothing is sent and no new index is created. Surely new (read: unscanned) files should be parsed?

I have stopped and restarted Beats and can see through Process Monitor that Beats is indeed scanning the files.

What am I missing? The issue seems to be Beats deciding whether or not to send the data from the parsed log files; in this case it is deciding not to.

Any help would be appreciated.


I feel a little stupid now :frowning:

I switched on debugging in the filebeat.yml file and saw it was having issues reaching Logstash.

2016-05-13T09:23:04+01:00 INFO Connecting error publishing events (retrying): dial tcp x.x.x.x:5044: connectex: No connection could be made because the target machine actively refused it.
2016-05-13T09:23:04+01:00 INFO send fail
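For anyone wanting to reproduce this, debug output like the above can also be enabled from the command line rather than editing filebeat.yml; a sketch (flag names taken from the Filebeat docs, adjust for your version):

```shell
# -e sends log output to stderr instead of the log file,
# -d "*" enables all debug selectors (use e.g. -d "publish"
# to narrow it down to the output/publishing code path).
filebeat -c filebeat.yml -e -d "*"
```

Running it this way in a console makes connection errors to Logstash show up immediately.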

Turned out the Logstash service had died (its status showed "running (exited)"), so it wasn't responding. A quick reboot and it was back.
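For anyone hitting the same "connection actively refused" error, a quick way to confirm the Beats host can actually reach Logstash on its Beats input port is a plain TCP check; a sketch (replace x.x.x.x with your Logstash address, as in the log above):

```shell
# A refused connection here matches the "connectex" error above
# and means Logstash (or its beats input on 5044) is not listening.
nc -vz x.x.x.x 5044

# Equivalent check from a Windows host (PowerShell):
#   Test-NetConnection x.x.x.x -Port 5044
```

If this fails, the problem is the Logstash service or a firewall, not the Filebeat configuration.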


@James_Tighe Glad you found the issue. If you want Filebeat to re-read all files, you need to delete Filebeat's registry file and restart Filebeat. Of course, new files should always be detected.
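To make that concrete, a sketch of forcing a full re-read (the registry path below is an assumption; the actual location depends on the registry_file setting in filebeat.yml and how Filebeat was installed):

```shell
# Stop Filebeat so the registry is not rewritten on shutdown.
service filebeat stop

# Delete the registry; this is where Filebeat records per-file
# read offsets, so removing it forgets what has been shipped.
# Path is an assumption - check registry_file in filebeat.yml.
rm /var/lib/filebeat/registry

# On restart, Filebeat re-reads every matching file from the start.
service filebeat start
```

Note that this resends everything, so expect duplicate events in Elasticsearch unless the old indices have been deleted first (as in this thread).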

Brilliant, that is exactly what I needed. Knowing I can do that helps a lot.