The issue I have is that Logstash ramps up slowly when clearing the queue (using RabbitMQ). During a spike of activity the RabbitMQ queue can grow very rapidly, but it takes minutes for Logstash to increase its rate enough to clear the queue.
Here is a reproducible scenario under test conditions:
- Generate a spike to grow the queue in RabbitMQ to 20k events
- Logstash processes at its average rate of around 200 events/sec, then after about 2 minutes it suddenly jumps to 3000+/sec
Why is there a delay? It seems like Logstash waits a long time before checking with RabbitMQ to determine whether there is a large backlog of events to process. Is this configurable? I'd appreciate any suggestions.
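For reference, my RabbitMQ input looks roughly like this (host and queue names are illustrative, not my real values); I've left `prefetch_count` at its default, and I'm wondering whether that's the setting that controls how aggressively Logstash pulls from the queue:

```
input {
  rabbitmq {
    host    => "rabbitmq.example.local"   # illustrative host
    queue   => "logstash-events"          # illustrative queue name
    durable => true
    # prefetch_count => 256               # plugin default; is this the knob?
    # threads        => 1                 # plugin default consumer threads
  }
}
```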
I reviewed the suggestions in https://www.elastic.co/guide/en/logstash/5.5/performance-troubleshooting.html and increased pipeline.workers and pipeline.output.workers, but the issue persists. The other suggestions (JVM heap, CPU contention) do not apply to me. Data storage is SSD with plenty of throughput.
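Concretely, these are the settings I changed in logstash.yml (the values shown are what I tried on my test box, not a recommendation):

```yaml
# logstash.yml (Logstash 5.5) — values I tried while testing
pipeline.workers: 8          # raised from the default (= number of CPU cores)
pipeline.output.workers: 4   # raised from the default of 1
# pipeline.batch.size: 125   # left at the default
```

Neither change affected the ~2-minute ramp-up delay, only the eventual peak rate.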