When the service starts, it shows the following information:
{:timestamp=>"2016-11-15T09:54:24.184000+0000", :message=>"Starting pipeline", :id=>"main", :pipeline_workers=>1, :batch_size=>125, :batch_delay=>5, :max_inflight=>125, :level=>:info}
{:timestamp=>"2016-11-15T09:54:24.185000+0000", :message=>"Pipeline main started"}
Then it shows nothing new in either the logstash.err or the logstash.log file. I tried removing the .sincedb file, and the monitored folder keeps having new lines written to it, but Logstash still finds no events. What's the problem here?
Sorry for the late reply. I'm forced to set discover_interval => 0 because the time it takes to discover new files also depends on stat_interval. If I set a large discover interval, new files won't be found for a long period.
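For context, the kind of file input being tuned here looks roughly like the sketch below; the path and sincedb_path values are placeholders, not the actual configuration from this setup:

input {
  file {
    # Placeholder glob for the monitored folder
    path => "/var/log/myapp/*.log"
    start_position => "beginning"
    # Explicit sincedb location, so it is obvious which file to delete when resetting state
    sincedb_path => "/var/lib/logstash/.sincedb_myapp"
    # Files are stat()ed every stat_interval seconds; the glob is re-expanded to
    # discover new files roughly every discover_interval * stat_interval seconds,
    # hence the small discover_interval mentioned above
    stat_interval => 1
    discover_interval => 0
  }
}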
I set the log level to --verbose, but the logs didn't show any useful error messages, and they flood when the level is set to --debug. I haven't been able to find any useful clues so far.
Indeed, the file input can find some of the files under the given path, but it also misses some. Once, when I noticed some files were not being picked up by the input, I did nothing but run "service logstash reload", and the missed files were then processed. After some time it again fails to find part of the files. It's really strange and I have no idea how to resolve it.
Logstash logs which files it finds when it expands the filename patterns. Is that list correct? If you have a lot of file churn, maybe your problem is inode number reuse.
Yes, it's able to find some of the files matching the filename patterns. I found that the .sincedb file had a record for a file's inode number, but that file still didn't show up in the output. Is this phenomenon related to inode number reuse? Why won't the file input continue to handle the file whose inode number is in the .sincedb file?
Some of the files? Or all files?
According to the sincedb file, Logstash has already processed the file. Logstash's file input isn't able to delete old entries from the sincedb, but I think Filebeat is.
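To make the inode-reuse point concrete: in Logstash 2.x each sincedb line records, roughly, the inode, the device major and minor numbers, and the byte offset already read; the numbers below are purely illustrative:

2756058 0 64768 1262

If an old log file is deleted and a new file later happens to get the same inode, the file input sees an inode it believes it has already read up to that offset, so it skips the start of the new file (or the whole file, if it is shorter than the recorded offset). That matches the symptom of a file being listed in the sincedb but producing no output.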