I have seen a few older solutions for HA with Logstash, and I wonder whether they still hold true. The official documentation on scaling Logstash (https://www.elastic.co/guide/en/logstash/current/deploying-and-scaling.html) shows multiple queues and/or multiple shippers, but only one indexing instance. Near the end it states:
Alternately, increase the Elasticsearch cluster’s rate of data consumption by adding more Logstash indexing instances.
However, there are no suggestions as to how best to do this. If I have, say, 10 clients all shipping logs, do I simply enable the load-balancing config flag in Filebeat? Or must I have a separate tier that ships logs to another Logstash layer or Redis layer?
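For context, here is the kind of Filebeat setup I mean — a minimal sketch with hypothetical hostnames, using Filebeat's `loadbalance` option to spread events across several Logstash indexing instances rather than sticking to one:

```yaml
# filebeat.yml (sketch — hostnames are placeholders)
output.logstash:
  # multiple Logstash indexing instances on the default Beats port
  hosts: ["logstash1:5044", "logstash2:5044", "logstash3:5044"]
  # distribute events across all listed hosts instead of
  # picking a single host at random per connection
  loadbalance: true
```

My understanding is that if one host goes down, Filebeat keeps sending to the remaining ones, but I'm not sure whether this is considered sufficient for HA or whether a broker tier is still recommended.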
Any recommendations would be great. Resources are relatively slim. The log ingestion rate on our current single-node stack is around 800 events/second, but we'd be looking to scale that up by as much as 10x over the coming months, so a degree of breathing room would be helpful.
In any case, this is more about HA than load balancing, so whatever you think is best would be interesting to read. Thanks in advance.