Retry-on-failure mechanism in the Logstash BigQuery output plugin

Recently, I encountered an issue where event uploads to BigQuery from Logstash failed with a `BigQueryException: The service is currently unavailable`
error. My concern is that valid events might be lost due to transient BigQuery unavailability. In this scenario, does Logstash not automatically retry sending the failed events? I understand that these events are stored in the `error_directory`, but the Logstash-to-BigQuery pipeline appears to lack a built-in retry mechanism. Is there any way to implement a retry for failed events within the BigQuery output plugin? Does Logstash have any internal retry mechanism for handling such failures?

The documentation says you can "manually fix the events" and then (presumably) re-route them back through the pipeline. Logstash is not going to handle that internally.
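One way to do that re-routing yourself (a sketch, not anything built into the plugin) is a separate recovery pipeline that tails the `error_directory` with a `file` input and feeds the recovered rows back to the same output. All paths, project/dataset names, and credentials below are placeholders:

```
# Hypothetical recovery pipeline: re-ingest rows that the google_bigquery
# output wrote to its error_directory. Adjust paths and credentials to
# your deployment.
input {
  file {
    path  => "/tmp/bigquery_errors/*"  # must match the output's error_directory
    mode  => "read"                    # read complete files rather than tailing
    codec => "json"                    # error files contain the failed rows as JSON
  }
}

output {
  google_bigquery {
    project_id      => "my-project"         # placeholder
    dataset         => "my_dataset"         # placeholder
    json_key_file   => "/path/to/key.json"  # placeholder
    # Write any rows that fail again to a DIFFERENT directory, so the
    # recovery pipeline does not re-read its own failures in a loop.
    error_directory => "/tmp/bigquery_errors_retry"
  }
}
```

Note that this gives you at-least-once rather than exactly-once delivery: a row that was partially uploaded before the failure may be inserted twice, so downstream deduplication may be needed.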

Hi @Deena_Dayalan, you can use dead letter queues. The dead letter queue (DLQ) is designed as a place to temporarily write events that cannot be processed; later you can reprocess all the events from the DLQ.
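For context, DLQ reprocessing (for outputs that actually write to the DLQ, such as elasticsearch) looks roughly like this: enable it in `logstash.yml`, then replay the entries with the `dead_letter_queue` input plugin. The pipeline id below is a placeholder:

```
# logstash.yml -- enable the dead letter queue
dead_letter_queue.enable: true

# recovery pipeline -- replay DLQ entries for further processing
input {
  dead_letter_queue {
    path           => "/var/lib/logstash/dead_letter_queue"  # default DLQ location
    pipeline_id    => "main"   # placeholder: the pipeline that wrote the entries
    commit_offsets => true     # remember which entries have already been replayed
  }
}
```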

Thanks @Badger @ashishtiwari1993. I see that the `error_directory` setting itself acts as a kind of DLQ in the BigQuery output plugin.

No, you cannot. The elasticsearch output uses a DLQ because the code invokes the dlq writer. (It may be the only output plugin that does so.) The equivalent code in the bigquery output writes to the error directory.

That method in the bigquery output could be modified to use a DLQ, but it does not do so as-is.