Detect filebeat retries to remove duplicates on the server side

The filename, offset, and beat (shipper) name together are a good source for deduplication. One trick (when sending to logstash) is to build an id from these fields and just re-index the document under that id. I think re-indexing in elasticsearch marks the old entry as deleted and creates a new one (right, that takes some extra disk space and CPU, but deleted entries are finally removed from disk when segments are merged). It's a very simple trick to implement deduplication.
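A minimal sketch of the idea in Python, assuming events arrive as dicts carrying the fields filebeat normally sets (beat name, source file, offset); the index name `filebeat-dedup`, the field layout, and the use of the elasticsearch-py 8.x client are illustrative assumptions, not part of the original post:

```python
import hashlib

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")


def dedup_id(event: dict) -> str:
    """Build a deterministic _id from shipper name, file name and offset."""
    key = "{}|{}|{}".format(
        event["beat"]["name"],  # shipper (beat) name
        event["source"],        # file the line was read from
        event["offset"],        # byte offset within that file
    )
    return hashlib.sha1(key.encode("utf-8")).hexdigest()


def index_event(event: dict) -> None:
    # Indexing twice with the same _id overwrites the first copy, so a
    # retried send becomes an update instead of a duplicate document.
    # (elasticsearch-py 8.x uses document=; older clients use body=.)
    es.index(index="filebeat-dedup", id=dedup_id(event), document=event)


if __name__ == "__main__":
    event = {
        "beat": {"name": "web-01"},
        "source": "/var/log/nginx/access.log",
        "offset": 4096,
        "message": "GET /index.html 200",
    }
    index_event(event)
    index_event(event)  # simulated retry: same _id, no duplicate created
```

The same effect can be had inside a logstash pipeline by computing the id in a filter and passing it as the `document_id` of the elasticsearch output; the Python version above just makes the mechanics explicit.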