Using RabbitMQ as a broker between Beats and Logstash

This depends on the volume of data you have and on how long you want that data to remain available in the broker.

In most cases there is basically no need to tune Kafka or Logstash; just keep in mind to use more than one partition on the Kafka side, ideally with the number of partitions equal to the number of Logstash nodes.
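For reference, here is a minimal sketch of what the Kafka side could look like in a Logstash pipeline. The broker addresses, topic name, and thread count are made-up placeholders, not something from your setup:

input {
        kafka {
                # placeholder broker list and topic name
                bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
                topics => ["beats"]
                # every Logstash node uses the same group_id, so Kafka
                # spreads the partitions across the nodes automatically
                group_id => "logstash"
                # nodes x consumer_threads should not exceed the number
                # of partitions, otherwise some threads sit idle
                consumer_threads => 2
                codec => "json"
        }
}

With a shared group_id, adding or removing a Logstash node just triggers a consumer group rebalance; nothing needs to change on the Beats side.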

You need to start small and improve as needed. It is pretty common to see people try to start with the optimal configuration for every tool, which is a mistake in my opinion.

I want to test some scenarios and choose the optimal one for my use case, since I have several candidate configurations, but expertise plays an important role in this type of tuning.

OK, I will start with the number of partitions equal to the number of Logstash nodes and see how it performs.
Thank you @leandrojmp

Hi there,

Have you thought about using Redis? We sit Redis between our Filebeat servers and Logstash instances.

We have a bank of single Redis instances that we load balance across from Filebeat. Our Logstash servers are set up with the Redis input configured to read from each of the Redis instances.

For us it works really well and supports the ingestion of around 2 billion log lines a day.

Hello @intrepid1,

But does Redis scale if the system grows bigger? Can you explain more about how you set it up?

Keep in mind that if you use the Elastic Agent you cannot send logs to Redis; it is not a supported output.

With the Elastic Agent only Logstash, Elasticsearch and Kafka are supported outputs.


This is how we configure it, but please bear in mind Leandro's last points.

We use a bank of single-instance Redis servers. In the example here, we have three Redis instances. We configure Filebeat to round-robin across our Redis instances as follows:

output.redis:
  hosts: ["server1","server2","server3"]
  # key names the Redis list to write to; it must match the key
  # used by the Logstash redis inputs below
  key: "list_name"

Logstash is then configured as follows:

input {
        redis {
                host => "server1"
                port => "6379"
                codec => "json"
                data_type => "list"
                key => "list_name"
                threads => 8
        }
        redis {
                host => "server2"
                port => "6379"
                codec => "json"
                data_type => "list"
                key => "list_name"
                threads => 8
        }
        redis {
                host => "server3"
                port => "6379"
                codec => "json"
                data_type => "list"
                key => "list_name"
                threads => 8
        }
}

If you want to scale out, you introduce a new Redis instance and then update the Filebeat and Logstash configurations appropriately.
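As a sketch of that, bringing a hypothetical fourth instance "server4" online would mean extending the Filebeat hosts list:

output.redis:
  hosts: ["server1","server2","server3","server4"]
  key: "list_name"

and adding a fourth redis block inside the existing Logstash input section:

        redis {
                # the new, hypothetical instance; all other settings
                # match the existing blocks above
                host => "server4"
                port => "6379"
                codec => "json"
                data_type => "list"
                key => "list_name"
                threads => 8
        }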


Thank you @intrepid1.
