Slow Ingestion of Final Log Chunks (Filebeat + Logstash + Elasticsearch)

You could maybe have shared this a bit earlier in the thread .... ?

There are other ways you could de-duplicate your data before ingest; it could even be done in Logstash itself (see the sketch below).

But that would add significant complexity.
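If you did want to try it in Logstash, a minimal sketch of the usual pattern is below: hash whatever fields identify a duplicate into a stable `_id`, so re-ingested copies overwrite the same document instead of creating new ones. The field names, hosts and index name here are placeholders, not taken from your setup.

```
filter {
  # Build a deterministic ID from the fields that define "the same event".
  # "message" is just an assumption -- use whatever uniquely identifies your docs.
  fingerprint {
    source => ["message"]
    target => "[@metadata][fingerprint]"
    method => "SHA256"
  }
}

output {
  elasticsearch {
    hosts       => ["http://localhost:9200"]   # placeholder
    index       => "my-logs"                   # placeholder index name
    # Duplicates get the same _id, so they update/overwrite rather than pile up.
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```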

Someone recently had a similar issue: they were using action => "create", so the first create for a given docX succeeded, then they were getting (expected) errors on further create (not update) requests for the same docX (same _id), and wanted to squash the error (or was it warning?) messages. This might be quicker.
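For reference, a rough sketch of that variant (same assumptions as above about the fingerprint ID and placeholder index/hosts): with action => "create", a second delivery of the same _id is rejected with a version-conflict error instead of silently overwriting the existing document.

```
output {
  elasticsearch {
    hosts       => ["http://localhost:9200"]   # placeholder
    index       => "my-logs"                   # placeholder
    document_id => "%{[@metadata][fingerprint]}"
    # "create" fails if the _id already exists, so duplicates are rejected
    # rather than updating the first copy. Those failures are expected noise.
    action      => "create"
  }
}
```

I believe newer versions of the elasticsearch output plugin also have a silence_errors_in_log option for quieting exactly those expected conflicts, but check the docs for your plugin version.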