Make Logstash drop documents on 403

Greetings

Recently I started using forcemerge on my old indices. However, I found that Logstash occasionally writes into the older indices, increasing the segment count, so Curator has to merge them again the next day. To prevent this, I now switch older indices to read-only just before merging.
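For reference, the write block can be set with an index-settings request like the following (the index name is just an example); this is what produces the FORBIDDEN/8 block in the log below:

```
PUT logstash-2018.09.20/_settings
{
  "index.blocks.write": true
}
```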
However, now when I look at the Logstash logs, there are a lot of entries like this:

[2018-09-26T11:59:47,219][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/8/index write (api)];"})

Is there a way to tell Logstash to drop the documents which receive 403?

Thanks

Alternatively, is there a way to drop documents already in the Logstash queue with the @timestamp field older than 1 day?

You're going to want to go with the alternative, because Logstash will never just drop documents which yield a 403 error.

There are likely a few timestamp comparison examples here in the discussion forums. That's really all you should need.
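The comparison itself is just date arithmetic. A minimal sketch in plain Ruby (outside Logstash, using a hypothetical `too_old?` helper and a one-day cutoff) could look like:

```ruby
require 'time'

MAX_AGE_SECONDS = 86_400  # one day

# Returns true when the timestamp is more than MAX_AGE_SECONDS
# behind the reference time.
def too_old?(timestamp, now = Time.now)
  (now - timestamp) > MAX_AGE_SECONDS
end

too_old?(Time.now - 200_000)  # event from ~2.3 days ago => true
```

Inside a Logstash ruby filter you would read the timestamp with `event.get("@timestamp")` and drop the event with `event.cancel` instead of returning a boolean.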

Thanks for the information. Should a feature request be submitted to Logstash to drop documents on 403, or at least place them into the DLQ, instead of polluting the output queue? I don't think this is too specific to my scenario, as the only way to fix a 403 is to either give the Logstash user the correct privileges or make the relevant indices writable. Neither of these is resolved by simply retrying, so placing these documents into the DLQ seems reasonable to me.

Error 403: "The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated."

It's not about privileges. It's that Elasticsearch is flat out refusing to do anything with the message.

Maybe it is a good idea to enable the DLQ for this. Maybe it already handles this case. Have you checked? The DLQ feature in Logstash has to be enabled explicitly; it isn't on by default.
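Enabling it is a one-line change in `logstash.yml` (the path shown here is just an example; by default the queue lives under `path.data/dead_letter_queue`):

```yaml
dead_letter_queue.enable: true
path.dead_letter_queue: "/var/lib/logstash/dlq"
```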

I have enabled the DLQ, but it wasn't used in this case.

And yes, the article you linked confirms that the same request (in this case, writing to an old index) should not be re-attempted after a 403.

Sounds like a good feature request, then.

Could you please point me to one? I can't seem to find any that fit my situation. So I asked a new question, but no one has answered it yet.

I found a way:

ruby {
  # Cancel (drop) events whose @timestamp is more than five days
  # (432000 seconds) in the past.
  code => '
    if LogStash::Timestamp.new(event.get("@timestamp") + 432000) < LogStash::Timestamp.now
      event.cancel
    end
  '
}
