Logstash Output to Elasticsearch - Try Once Logic

I have a cluster that processes around 200k NetFlow records per second. At that volume, the ES cluster intermittently falls behind on indexing and returns 503s. That drives up pipeline latency, because Logstash keeps retrying the same data while more data backs up behind it, which only makes the problem worse. On top of that, I have a kafka output plugin in the same pipeline that never receives this data while ES is trying to catch up.
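
As far as I can tell, the two outputs are coupled because they sit in the same pipeline: a worker batch has to clear every output before the next batch is processed. Roughly, with the kafka options below being placeholders:

output {
  # One batch must clear BOTH plugins before the next batch runs,
  # so while elasticsearch is stuck retrying a 503, kafka receives nothing.
  elasticsearch { hosts => [ <my ES nodes> ] index => "busy_index" }
  kafka { bootstrap_servers => "<my kafka brokers>" topic_id => "netflow" }
}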

Is there a way to get Logstash to try indexing a document once and then move on? I've tried the following settings, to no avail:

output {
  elasticsearch {
    id                     => "my_unique_id"
    hosts                  => [ <my ES nodes> ]
    retry_initial_interval => 0  # seconds before the first retry of a failed bulk request
    retry_max_interval     => 0  # cap on the exponential backoff between retries
    retry_on_conflict      => 0  # only applies to update actions, not plain index requests
    index                  => "busy_index"
  }
}
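
One possibility I've looked at is the pipeline-to-pipeline "output isolator" pattern from the Logstash docs: split the two outputs into separate pipelines, each with its own queue, so a backed-up ES output can no longer starve the kafka output. A minimal pipelines.yml sketch, where the pipeline IDs and the NetFlow intake input are made up:

# pipelines.yml (sketch; pipeline IDs and the intake input are assumptions)
- pipeline.id: netflow-intake
  config.string: |
    input { udp { port => 2055 codec => netflow } }
    output { pipeline { send_to => ["es-out", "kafka-out"] } }

- pipeline.id: es-out
  queue.type: persisted  # buffer to disk so ES retries don't immediately back-pressure the intake
  config.string: |
    input { pipeline { address => "es-out" } }
    output { elasticsearch { hosts => [ <my ES nodes> ] index => "busy_index" } }

- pipeline.id: kafka-out
  config.string: |
    input { pipeline { address => "kafka-out" } }
    output { kafka { bootstrap_servers => "<my kafka brokers>" topic_id => "netflow" } }

This wouldn't make the ES output try only once, but it would keep the retries from blocking the kafka path, at least until the persisted queue on es-out fills.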
