You could maybe have shared this a bit earlier in the thread...?
There are other ways you could de-duplicate your data before ingest; it could even be done in Logstash itself, but that would add significant complexity.
Someone recently had a similar issue: they were using action => "create" the first time a document (say docX) was seen, then getting the expected errors on further create (not update) requests for the same docX (same _id), and wanted to squash the resulting error (or was it warning?) messages. That approach might be quicker for you.
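For reference, here's a minimal sketch of that pattern in a Logstash pipeline. It derives a deterministic _id from the event content with the fingerprint filter, then uses action => "create" so only the first copy of a document is indexed. The choice of message as the source field, plus the hosts and index name, are placeholders you'd adapt to your setup:

```
filter {
  # Compute a stable fingerprint from the event content so duplicates
  # map to the same document _id.
  fingerprint {
    source => ["message"]                 # assumption: duplicates share the same message
    target => "[@metadata][fingerprint]"
    method => "SHA256"
  }
}

output {
  elasticsearch {
    hosts       => ["http://localhost:9200"]          # placeholder
    index       => "my-index"                          # placeholder
    document_id => "%{[@metadata][fingerprint]}"
    action      => "create"   # first write succeeds, repeats fail with a version conflict (409)
  }
}
```

Depending on your version of the elasticsearch output plugin, there is also an option to silence specific error types in the log (e.g. version conflicts), which would cover the "squash the messages" part; check the docs for the version you're running.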