Does Logstash drop events?

How can I make sure that all events coming into Logstash are going out, and that no events are dropped from the Logstash queue? In other words, how do I know I am not losing events in Logstash and that all events are successfully emitted to Elasticsearch?

You cannot 100% guarantee no lost messages unless you have infinite disk space to queue them on.

If an output cannot keep up then there is a queue between the filters and the output. This is an in-memory queue by default, but you can use persistent (disk-based) queues.
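
As a rough sketch, the persistent queue is enabled in logstash.yml; the size limit and path below are example values for illustration, not defaults you are required to use:

```yaml
# logstash.yml -- example values, adjust to your environment
queue.type: persisted                 # default is "memory"
queue.max_bytes: 4gb                  # back-pressure kicks in once the queue reaches this size
path.queue: /var/lib/logstash/queue   # optional; defaults to a "queue" directory under path.data
```

With `queue.type: memory` (the default), anything sitting in the queue is lost if the Logstash process dies; with `persisted`, events already written to the queue survive a restart, at the cost of disk space and some throughput.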

If the queue in front of the output fills, then data is queued between the input and the filters. If that fills, the inputs stop ingesting data. If your input is something like UDP, it will start dropping data. If it is TCP-based, it will let the TCP window fill, and back-pressure shifts to the sender; this applies to the many TCP-based inputs. The kafka input, for example, leaves unconsumed data queued on the broker, but the broker does eventually drop it once its retention limits are reached.
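
For illustration, a minimal kafka input might look like the following; the broker address, topic, and group id are placeholders:

```
input {
  kafka {
    bootstrap_servers => "kafka-broker:9092"   # placeholder broker address
    topics => ["app-logs"]                     # placeholder topic
    group_id => "logstash"
    # If Logstash applies back-pressure and stops polling, unread events remain
    # on the broker until its retention settings (e.g. retention.ms) expire them.
  }
}
```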

Where can I check the size of these two queues, i.e. the queue between the filters and the output, and the one between the input and the filters?

I am not sure of the size of the in-memory queue, but it is not configurable.

Can I check how much of it is used, for example by running a specific API command?

I do not know of such an API call.
