Logstash in-mem queues and input plugins

Good day, dear forum users!
My log transfer pipeline is as follows:
App servers -> AWS SQS (queue) -> Logstash (input => sqs, output => elasticsearch) -> AWS Elasticsearch
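A minimal sketch of the relevant Logstash config (all values here are placeholders, not my real ones):

```
input {
  sqs {
    queue  => "my-log-queue"   # placeholder SQS queue name
    region => "us-east-1"      # placeholder AWS region
  }
}

output {
  elasticsearch {
    hosts => ["https://my-domain.es.amazonaws.com:443"]  # placeholder AWS ES endpoint
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```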

Sometimes Elasticsearch blocks writes (for example, due to lack of disk space), and this raises some questions for me.

Questions:

  1. What happens in Logstash? Do messages accumulate in Logstash's in-memory queue?
  2. Is there a limit on that queue (RAM size, etc.)?
  3. Why, while Elasticsearch is blocked for writes, do I see the SQS queue growing while Logstash does not fail?
  4. If it does not crash, does the JVM heap eventually fill up, at which point Logstash stops receiving messages?

Help me get to the bottom of this. Thank you in advance for your answers.

By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events. The size of these in-memory queues is fixed and not configurable.

If an output is blocked, back-pressure stops the pipeline workers and then blocks the inputs. Once the output starts writing again, the pipeline resumes processing events and the inputs continue reading.
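This back-pressure behaviour can be illustrated with a small bounded-queue sketch (plain Python, not Logstash internals; the queue size and event counts here are arbitrary):

```python
import queue
import threading
import time

# Bounded queue standing in for Logstash's fixed-size in-memory queue.
events = queue.Queue(maxsize=3)

produced = []

def producer():
    # Stands in for an input plugin reading messages.
    for i in range(5):
        events.put(i)        # blocks once the queue is full (back-pressure)
        produced.append(i)

t = threading.Thread(target=producer, daemon=True)
t.start()
time.sleep(0.2)              # simulate a blocked output: nothing consumes

stalled = len(produced)      # producer stalled after filling the queue (3 events)

# "Unblock the output": start consuming, and the producer resumes on its own.
while len(produced) < 5 or not events.empty():
    events.get()
    time.sleep(0.01)
t.join(timeout=1)

print(stalled, produced)     # the input stalled at 3, then finished all 5
```

The producer stalls as soon as the bounded queue is full and resumes automatically once the consumer drains it, which is analogous to inputs blocking while an output is stuck and continuing once it recovers.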
