Filebeat and Duplicate entries

I need my Filebeat process to identify duplicate entries and update, rather than recreate, documents based on this.
This would be similar to setting the document ID in Logstash.

Is this possible with Filebeat?

My Filebeat process passes the data to an ingest pipeline....could it be done at that point?
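For reference, Filebeat itself ships with a `fingerprint` processor (available since 7.6) that can hash a set of fields and write the result into `@metadata._id`; Elasticsearch then uses that value as the document `_id`, so a re-shipped event overwrites the existing document instead of creating a duplicate. A minimal sketch, assuming `message` plus `log.file.path` uniquely identify an event in your data (adjust to whatever fields actually do):

```yaml
filebeat.inputs:
  - type: filestream
    id: app-logs            # hypothetical input id
    paths:
      - /var/log/app/*.log  # hypothetical path

processors:
  # Hash the identifying fields and use the hash as the Elasticsearch
  # document _id, so duplicates index over the same document.
  - fingerprint:
      fields: ["message", "log.file.path"]
      target_field: "@metadata._id"
```

With a fixed `_id`, the default `index` operation replaces the previous version of the document, which gives the "update, not recreate" behaviour.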

Does this help? https://www.elastic.co/blog/efficient-duplicate-prevention-for-event-based-data-in-elasticsearch

Thanks, but that solution is based on Logstash.

My setup does not include Logstash...it is a Filebeat/ingest pipeline process.
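Since the data already flows through an ingest pipeline, the same idea can also be done on the Elasticsearch side. A sketch, assuming Elasticsearch 7.12+ (which added the `fingerprint` ingest processor) and, again hypothetically, that `message` and `log.file.path` identify an event:

```json
PUT _ingest/pipeline/dedupe-logs
{
  "processors": [
    { "fingerprint": { "fields": ["message", "log.file.path"] } },
    { "set":         { "field": "_id", "copy_from": "fingerprint" } },
    { "remove":      { "field": "fingerprint" } }
  ]
}
```

The `fingerprint` processor writes the hash into the `fingerprint` field by default, the `set` processor copies it into the `_id` metadata field, and the `remove` processor drops the temporary field. Any event with the same identifying fields then lands on the same document ID.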

Garry

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.