Logstash sizing for a large environment

Hello everyone, I'm trying to set up an ELK stack that has to receive approximately 4.5 TB of events per day, with peaks of 100k EPS during working hours.
The collection must be done using Logstash, and all the events will be shipped in JSON format over the syslog protocol.
Considering that Logstash will probably not need to do any filtering or transformation, we'll implement just a simple syslog input / Elasticsearch output.
Can you please help me size the dedicated Logstash nodes, suggesting the number of nodes, their specifications, and configuration such as pipelines, workers, heap, etc.?
Thanks in advance!
Best Regards

I think the best way to find the optimal size for your Logstash nodes is by testing: do a proof of concept and then scale as needed.

But there are some things that you need to consider.

First, your Logstash performance will depend on your Elasticsearch performance: if your Elasticsearch cluster cannot keep up with the event rate sent by your Logstash nodes, it will tell Logstash to back off a little.

When this happens, Logstash will apply back-pressure on the input, and for some inputs this can lead to events being dropped; syslog (TCP/UDP) is one of those cases.

To deal with this you need to buffer your events. One way is to use persistent queues in Logstash, which write events to disk before processing, but this can add latency and performance issues, as you will probably need big and fast disks and every event has to be written to and read from disk.
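If you go that route, a minimal sketch of enabling a persistent queue per pipeline in pipelines.yml could look like this (the pipeline id, path, and sizes here are placeholders you would need to tune for your own disks):

```
# pipelines.yml -- hypothetical example, values are placeholders
- pipeline.id: syslog-ingest
  path.config: "/etc/logstash/conf.d/syslog-ingest.conf"
  queue.type: persisted          # buffer events on disk instead of in memory
  queue.max_bytes: 50gb          # cap on the on-disk queue size
  queue.checkpoint.writes: 1024  # force a checkpoint after this many writes
```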

Another option is to use Kafka as a message buffer.

I'm close to 60k events/s, and what I like to do is use two layers of Logstash, Kafka, and load balancers.

The first Logstash layer just receives the logs and puts them into Kafka topics; the second one reads from Kafka, processes the events, and sends them to Elasticsearch.

Basically I have something like this:

Data Source -> Load Balancer -> Logstash nodes acting as Kafka Producers -> Kafka Cluster <- Logstash nodes acting as Kafka Consumers -> Elasticsearch.
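As a rough sketch of the two layers (the port, broker addresses, topic, group id, and index name are made up for illustration, not taken from my setup):

```
# Layer 1: shipper pipeline -- syslog in, Kafka out (hypothetical values)
input {
  syslog {
    port => 5514
  }
}
output {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topic_id          => "raw-syslog"
    codec             => json
  }
}
```

```
# Layer 2: indexer pipeline -- Kafka in, Elasticsearch out (hypothetical values)
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topics            => ["raw-syslog"]
    group_id          => "logstash-indexers"  # consumers in the same group share the partitions
    codec             => json
  }
}
output {
  elasticsearch {
    hosts => ["https://es1:9200", "https://es2:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
```

Scaling the second layer is then mostly a matter of adding consumers to the same group and making sure the topic has enough partitions.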

Since you said that you do not plan to filter or transform data in Logstash, you may not even need Logstash and could use Filebeat/Elastic Agent for this.
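If that turns out to be an option, a minimal Filebeat sketch for this case would just be a syslog input and an Elasticsearch output (port and hosts below are placeholders):

```
# filebeat.yml -- hypothetical sketch, values are placeholders
filebeat.inputs:
  - type: syslog
    protocol.tcp:
      host: "0.0.0.0:5514"

output.elasticsearch:
  hosts: ["https://es1:9200"]
```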

Hello leandro, thanks for your reply. We've planned our Elasticsearch cluster with 3 dedicated master nodes, 6 data nodes (64 GB RAM each), and 3 coordinating nodes, which should be enough to obtain what we need. We don't have HA needs, nor particular retention needs.
We have to use Logstash due to some requirements from the vendor... unfortunately we can't change that. That's why I need some help with its sizing...

To add to this approach: in the flow I managed, we opted to split the second Logstash layer in two, one for our dev environment(s) and one for production. With that scaling and configuration we ensured that, in case of any pressure, it would first hit the dev environment and production would keep going.

Logstash has plenty of "dials" which you can play around with to tune performance. It really depends on what you are trying to achieve.
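For example, the usual knobs are the per-pipeline worker and batch settings in pipelines.yml (or logstash.yml) and the JVM heap in jvm.options; the numbers below are only illustrative starting points, not a recommendation for your hardware:

```
# pipelines.yml -- illustrative starting points only
- pipeline.id: syslog-ingest
  pipeline.workers: 8        # often set to the number of CPU cores
  pipeline.batch.size: 250   # events each worker collects before flushing to the output
  pipeline.batch.delay: 50   # ms to wait for a batch to fill
```

```
# jvm.options -- keep -Xms and -Xmx equal and well below the node's RAM
-Xms8g
-Xmx8g
```

From there, benchmarking with your real event stream is the only reliable way to pick final values.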