During scale-in of Logstash through HPA, data remains in the persistent queue

Problem Statement:
In our environment we send application logs from Fluentd to Logstash, where persistent queues are enabled. HPA is enabled on the Logstash pods, so when the load increases the pods scale out, and when the load returns to normal they scale in.
However, we have observed that when the Logstash pods scale in, some data remains in the persistent queue.

Those logs stay stuck in the persistent queue until the pods scale out again, so there is a delay in the logs, which affects consistency.
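For context, our deployment is roughly along the lines of the sketch below (names, sizes, and the image tag are illustrative, not our exact manifest). Each replica keeps its persistent queue on its own PersistentVolumeClaim, which is why the queued data of a scaled-in replica just sits on its volume until that replica is recreated:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash
spec:
  serviceName: logstash
  replicas: 2                                  # scaled by the HPA
  template:
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:8.4.0
          volumeMounts:
            - name: pq-data
              mountPath: /usr/share/logstash/data   # default path.data; the queue lives under data/queue
  volumeClaimTemplates:
    - metadata:
        name: pq-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi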

Logstash version: 8.4.0

Probable Solution
It would be helpful if the data in the persistent queue were cleared before shutdown.
We found the queue.drain setting mentioned in the issue below, which should flush the data out of Logstash before it stops, but even that does not help.

#12966
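What we tried is along these lines (a sketch of our settings, not the exact files; the grace period value is just an example). queue.drain is set in logstash.yml so that shutdown should block until the persistent queue is empty, and we assumed the Kubernetes termination grace period must be long enough for that drain to finish before the pod is killed:

# logstash.yml
queue.type: persisted
queue.drain: true                    # block shutdown until the persistent queue is drained

# pod spec (excerpt) - grace period must outlast the drain,
# otherwise Kubernetes sends SIGKILL while events are still queued
spec:
  terminationGracePeriodSeconds: 300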

Please refer to the logs observed when we kill the pod:

[WARN ] 2023-04-03 11:46:44.324 [SIGTERM handler] runner - SIGTERM received. Shutting down.
[INFO ] 2023-04-03 11:46:51.043 [[pMainFluentD]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>"pMainFluentD"}
[INFO ] 2023-04-03 11:46:51.705 [Converge PipelineAction::StopAndDelete<pMainFluentD>] pipelinesregistry - Removed pipeline from registry successfully {:pipeline_id=>:pMainFluentD}
[INFO ] 2023-04-03 11:46:54.031 [[pecsTEST]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>"pecsTEST"}
[INFO ] 2023-04-03 11:46:54.131 [Converge PipelineAction::StopAndDelete<pecsTEST>] pipelinesregistry - Removed pipelin

Could you please help us identify whether we are missing anything, or suggest anything else we can try?
