Logstash file input stopped picking up file changes

I am using Logstash to process a large number of log files (over 1000 files in total). After the initial setup, all existing log files were processed successfully and indexed into Elasticsearch; this took just over a day. New entries were then being picked up for a while, until they simply stopped. The service was not restarted and the files were not rolled in any way; it seems that filewatch stopped picking up file changes. My config contains the following input:

file {
  path => "/mnt/logs/**/console-*.log"
  type => "tomcat"
  start_position => "beginning"
  ...
}
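
For completeness, these are the file input options I understand control discovery and polling. The values and the sincedb path below are illustrative (roughly the plugin defaults as I understand them), not what the elided part of my config actually contains:

file {
  path => "/mnt/logs/**/console-*.log"
  type => "tomcat"
  start_position => "beginning"
  # how often (in seconds) already-watched files are stat'ed for growth
  stat_interval => 1
  # new files matching the glob are discovered every stat_interval * discover_interval seconds
  discover_interval => 15
  # where read positions are persisted (placeholder path)
  sincedb_path => "/var/lib/logstash/plugins/inputs/file/sincedb-console"
}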

I set the log level to TRACE for everything and restarted Logstash. In the output I can see entries from filewatch about opening the sincedb file and reading its entries, but nothing after that. It has been about 30 minutes since the restart, and the log files have had thousands of new entries added in that time.

The Logstash log (at TRACE level) has contained only the following for the past 30 minutes, repeating every 5 seconds:

[2020-01-10T10:23:38,633][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-01-10T10:23:38,635][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-01-10T10:23:38,960][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.

How can I make it pick up new changes?

I am running the latest version of Logstash and Elasticsearch (7.5.1) on CentOS 7.7. The log files live in Azure Storage Account file shares, mounted locally over SMB. If I look at the files directly on the ELK server, I do see the new entries.
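
To narrow down whether filewatch notices growth on the SMB mount at all, this is the kind of minimal throwaway pipeline I could run against one file that I know is growing (the path and sincedb location are placeholders):

input {
  file {
    # single file that is known to be receiving new entries (placeholder path)
    path => "/mnt/logs/app1/console-app1.log"
    # fresh sincedb so previously stored read positions don't interfere
    sincedb_path => "/tmp/sincedb-smb-test"
    start_position => "beginning"
  }
}
output {
  # print every event that is read, just to confirm tailing works
  stdout { codec => rubydebug }
}

If events show up here but not in the main pipeline, that would suggest the problem is in my main config rather than in how the SMB mount reports file changes.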
