How to prevent data loss in a multi-pipeline configuration

Hi all,
I have configured several pipelines (managed with a multi-pipeline configuration). The pipeline that handles the most data often fails to load all of it.
To address this, I added a dedicated dead letter queue (DLQ), enabled persistent queues, and increased the RAM allocated to the Logstash JVM.
Now my configuration is the following:

[image: Logstash pipeline configuration]
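In text form, the settings I changed look roughly like this (a simplified sketch; the pipeline IDs, paths, and sizes below are placeholders, not my exact values):

```yaml
# pipelines.yml (sketch; IDs, paths, and sizes are placeholders)
- pipeline.id: high-volume
  path.config: "/etc/logstash/conf.d/high-volume.conf"
  queue.type: persisted            # persistent queue, so buffered events survive restarts
  queue.max_bytes: 1gb             # on-disk capacity of the persistent queue
  dead_letter_queue.enable: true   # events rejected by the Elasticsearch output go to the DLQ
- pipeline.id: low-volume
  path.config: "/etc/logstash/conf.d/low-volume.conf"
  queue.type: persisted
```

plus a larger heap in jvm.options:

```
# jvm.options (sketch; my actual heap size differs)
-Xms4g
-Xmx4g
```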

This is the resulting trend in the Discover visualization:

[image: Discover visualization showing the indexing trend]

You can easily notice some partial gaps in the data.

We're talking about 100,000 strings of 4,000 bytes each, so almost 400 MB in total. However, if I load the data manually in small batches, all of it is indexed.
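Spelled out, the back-of-the-envelope volume is:

```
100,000 strings × 4,000 bytes/string = 400,000,000 bytes ≈ 400 MB
```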
Based on this estimate, I would like to understand how to size the RAM appropriately.
Can you suggest a specific formula or something similar?

Thanks.

Marco
