Filebeat/Logstash and duplicate indexes

Hi all,

Question about Logstash and Filebeat. At work I sync (copy) an Apache log file from an Amazon S3 bucket (the issue is the same if I add data to the local file manually). Filebeat ships the Apache log data to Logstash, and Logstash forwards it to Elasticsearch. The first time Filebeat sends the file to Logstash it works great, but when I add data to the file, either by manually adding lines or by syncing from S3, the documents get duplicated. For example, say I see 100,000 lines in Kibana and then add just a few lines, 10 for example, to the log file: I end up seeing 200,010 instead of 100,010.

help please :slightly_smiling_face:
thanks

Logstash and Filebeat keep track of files through their inode and assume data will only ever be appended to them. If you copy a file over from S3, or rewrite it with an editor that saves by replacing the file, a new file with the same name but a different inode is created. Filebeat sees this as a brand-new file and processes it from the beginning, re-shipping every line that was already indexed, which is why your counts double.
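To make the inode behaviour concrete, here's a minimal Python sketch of registry-style tracking. This is an illustration of the mechanism, not Filebeat's actual implementation; the `registry` dict and `check_file` helper are made up for the example.

```python
import os

# Toy registry keyed on (device, inode) rather than file name,
# loosely mimicking how Filebeat's registry identifies files.
registry = {}  # path -> {"id": (st_dev, st_ino), "offset": bytes_read}

def check_file(path):
    st = os.stat(path)
    file_id = (st.st_dev, st.st_ino)
    state = registry.get(path)
    if state is None or state["id"] != file_id:
        # Inode changed: `aws s3 sync` and most editors replace the
        # file instead of appending, so reading starts over at offset 0
        # and every already-indexed line gets shipped again.
        registry[path] = {"id": file_id, "offset": 0}
        return "new inode -> re-read from start (duplicates)"
    return "same inode -> resume at stored offset"

# Appending keeps the inode; replacing the file changes it.
with open("/tmp/apache.log", "w") as f:
    f.write("line 1\n")
print(check_file("/tmp/apache.log"))  # new inode -> re-read from start

with open("/tmp/apache.log", "a") as f:  # append: same inode
    f.write("line 2\n")
print(check_file("/tmp/apache.log"))  # same inode -> resume

os.rename("/tmp/apache.log", "/tmp/apache.log.old")
with open("/tmp/apache.log", "w") as f:  # replacement: usually a new inode
    f.write("line 1\nline 2\n")
print(check_file("/tmp/apache.log"))  # new inode -> re-read from start
```

Run on Linux, the third check reports a new inode even though the path and contents look the same, which is exactly what happens after an S3 sync.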
