Our use case is as follows:
- ELK target version: 6.8.2
- Data comes in via Logstash, where it gets parsed and enriched
- All the data needs to go to Elasticsearch (we use the standard `elasticsearch` output for this)
- Some of the data needs to be sent to an external application that performs additional analysis; for that we use the `rabbitmq` output to pass data to RabbitMQ
- The external application consumes from RabbitMQ and does its thing
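For reference, the output section of our pipeline looks roughly like this (hosts, exchange name, and routing key below are placeholders, not our actual configuration):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  rabbitmq {
    host          => "localhost"
    exchange      => "logstash"
    exchange_type => "direct"
    key           => "analysis"
  }
}
```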
Other important things to consider:
- RabbitMQ queues all have a size limit (e.g. 100 MB), as this is an unsupervised installation
- The RabbitMQ overflow policy is set to `reject-publish`, as we don't want to lose any data item that has already been enqueued
- We are fine with RabbitMQ applying back-pressure to Logstash - in fact, this is the desired behavior: if the RabbitMQ queue is full, the whole Logstash pipeline should slow down
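The queue limit and overflow policy can be set with a RabbitMQ policy along these lines (the policy name and the `^logstash` queue-name pattern are illustrative):

```shell
# Limit matching queues to ~100 MB and reject new publishes when full
rabbitmqctl set_policy limit-logstash "^logstash" \
  '{"max-length-bytes": 104857600, "overflow": "reject-publish"}' \
  --apply-to queues
```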
This does not seem to be possible with the current `rabbitmq` output plugin, as it does not wait for an ACK (or, in this case, a NACK) from RabbitMQ: it keeps sending data to RabbitMQ, ignoring the broker's answers.
I see there is an `ack` option in the RabbitMQ input (see here); however, there doesn't seem to be anything similar in the output plugin.
I know setting this option will hurt throughput, but this is exactly our use case: we want to lower throughput when the queue is full, and enter an infinite retry loop to re-publish rejected messages.
We tried using a custom `ruby` filter to obtain this behavior, and we succeeded by adding an explicit call to `channel.wait_for_confirms` after the publish call.
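A minimal sketch of that workaround, assuming a `march_hare` connection (the same client the plugin uses); the host and routing key are placeholders, and error handling is elided:

```
filter {
  ruby {
    init => "
      require 'march_hare'
      conn = MarchHare.connect(host: 'localhost')
      @ch = conn.create_channel
      @ch.confirm_select            # enable publisher confirms
      @ex = @ch.default_exchange
    "
    code => "
      payload = event.to_json
      loop do
        @ex.publish(payload, routing_key: 'analysis')
        # Block until the broker ACKs or NACKs; retry on NACK,
        # e.g. when the queue is full with overflow=reject-publish
        break if @ch.wait_for_confirms
        sleep 1
      end
    "
  }
}
```

This blocks the pipeline worker until the broker confirms, which is exactly the back-pressure we want, but it would clearly be cleaner as a native option of the output plugin.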
Am I missing something? Is this a potential welcome addition to the RabbitMQ output plugin?