Our problem: we stopped the Logstash process to modify the .conf file, but after restarting it we noticed that log files that had already been parsed were being processed again. We are using a sincedb file, and it is not corrupted. The process had been working well for several months until now.
This is the second time we have faced this problem. Last time, we had to reindex all the log files.
Questions:
has anybody encountered this problem before?
if yes, what did you do to get back to a normal situation (i.e. avoid parsing all log files again)?
I know there have been some issues with reading from network volumes in the past, but I am not sure whether what you are experiencing can be attributed to that. Some improvements appear to have been added in Logstash 6.4, but I will have to leave it to someone more knowledgeable to comment further.
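One thing worth double-checking is whether the file input pins `sincedb_path` explicitly. If it is left unset, Logstash derives a default location, and changes to the input configuration or environment can cause it to track state in a different place, which looks like the sincedb being ignored. A minimal sketch of an input block that pins it (the paths here are placeholders, not your actual configuration):

```conf
input {
  file {
    # Placeholder glob for the logs being ingested
    path => "/var/log/myapp/*.log"
    # Pin the sincedb to a fixed, writable location so restarts
    # and config edits keep reusing the same tracked offsets
    sincedb_path => "/var/lib/logstash/sincedb-myapp"
    # Only needed if you intend to re-read from the start on first run
    start_position => "beginning"
  }
}
```

Note that `start_position => "beginning"` only applies to files the sincedb has never seen; if the sincedb entries no longer match the files (e.g. inode changes on a network volume), files are treated as new and read from the start regardless.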