If it retries, it is generally not clear whether the initial record reached the destination or not. Filebeat can provide additional metadata with each event, e.g. the filename and the offset within the file, which you could use for deduplication instead of the timestamp.
Filename, offset, and the beat (shipper) name together are a good basis for deduplication. One trick (when sending to Logstash) is to build a document ID from these fields and simply re-index the document. Indexing with the same ID in Elasticsearch marks the old entry as deleted and creates a new one (yes, that costs some disk space and CPU, but deleted entries are eventually removed from disk when segments are merged). It's a very simple trick to implement deduplication; see the sketch below.
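For reference, here is a minimal sketch of what that could look like in a Logstash pipeline, using the fingerprint filter to hash the shipper name, file path, and offset into an ID and passing it to the elasticsearch output as the document ID. The field names (`[beat][name]`, `source`, `offset`) and the host/index settings are assumptions and will vary with your Filebeat version and setup:

```
filter {
  fingerprint {
    # Hash the shipper name, file path and byte offset into one stable ID.
    # Field names are assumptions; newer Filebeat versions use
    # [agent][name], [log][file][path] and [log][offset] instead.
    source              => ["[beat][name]", "source", "offset"]
    concatenate_sources => true
    method              => "SHA1"
    target              => "[@metadata][doc_id]"
  }
}

output {
  elasticsearch {
    hosts       => ["localhost:9200"]           # assumed host
    index       => "filebeat-%{+YYYY.MM.dd}"    # assumed index pattern
    # Re-delivered events get the same ID, so Elasticsearch overwrites
    # the existing document instead of creating a duplicate.
    document_id => "%{[@metadata][doc_id]}"
  }
}
```

Since the fingerprint is kept in `[@metadata]`, it is used for routing but not stored in the indexed document itself.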