Well, I am less familiar with Logstash 5.0, but I have been using Logstash for a long time.
Whether you need "queuing" really depends on your needs. To go into a bit more detail, here is some info to chew on.
I do between 8k and 20k messages a second. So I have Logstash write to Kafka first, and then have an "indexing" Logstash read from the queue and write to Elasticsearch.
Here is the flow:
file -> Logstash shipper -> Kafka -> Logstash indexer -> Elasticsearch
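A minimal sketch of the two pipeline configs for that flow, assuming Logstash 5.x plugin options; the hostnames, file path, and topic name are placeholders, not my actual setup:

```
# Shipper pipeline: tail files and write to Kafka
input {
  file {
    path => "/var/log/app/*.log"        # placeholder path
  }
}
output {
  kafka {
    bootstrap_servers => "kafka1:9092"  # placeholder broker
    topic_id          => "logs"         # placeholder topic
  }
}

# Indexer pipeline (separate Logstash instance): read from Kafka, write to Elasticsearch
input {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topics            => ["logs"]
    group_id          => "logstash-indexer"
  }
}
output {
  elasticsearch {
    hosts => ["es1:9200"]               # placeholder ES node
  }
}
```

The point of the split is that the shipper side never talks to Elasticsearch directly, so the indexer side can be stopped and started independently.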
Now, I use Kafka, which is benchmarked at around 100K messages/s, whereas RabbitMQ is, I think, at around 20K/s, so I opted for Kafka.
I used to use Redis, and that worked just as well, but I had to do some tuning to get it working the way I wanted, whereas Kafka worked with the defaults (minus the learning curve).
So with all that technical stuff out of the way, here is how I would choose:
1. Very low volume and non-critical messages: no queue is needed, and that keeps the architecture simple.
2. High volume, or no tolerance for lost messages: Redis or any other queuing tech would be nice.
2a. Kafka and, I believe, RabbitMQ can be configured to "replay" old messages if you have lost data or want to rebuild your index.
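For the Kafka replay case, a hedged sketch: with the Logstash 5.x kafka input you can point a fresh consumer group at the start of the topic, since Kafka keeps messages for its configured retention period regardless of who has consumed them (the group name here is made up):

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092"   # placeholder broker
    topics            => ["logs"]        # placeholder topic
    group_id          => "rebuild-index" # new group ID, so no stored offsets
    auto_offset_reset => "earliest"      # start from the oldest retained message
  }
}
```

Because the group ID is new, there are no committed offsets, and `auto_offset_reset => "earliest"` makes the indexer re-read everything still retained on the brokers.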
Finally: I do recommend a queuing system, because it allows you to stop your Elasticsearch cluster or indexer servers at any time without ever touching your Logstash receivers/Beats. This is because the messages will queue up in Kafka/Rabbit/Redis until you start your indexing Logstash instances again.
Think of it this way: I have 400 servers. If I had to stop Filebeat on all of them and start it again later, that's a pain, and the chances of something getting missed or lost would be high.
But with a queuing system, if I want to do work on Elasticsearch or the indexing nodes, I can at any time. The messages will just queue up.