Logstash parallel processing

Hi, I'm developing a POC at a customer to process a huge amount of CSV files. So far I have ingested more than 100 million documents. I need more throughput when processing the files and ingesting into Elasticsearch. I'm using Docker and created a Logstash image with mapped volumes (logstash_pipeline and logstash_config). I would like to know whether it is possible to run two instances of Logstash with the same volumes (and configs/pipelines), so that I would double my capacity for reading/processing my CSV files. I don't know if the two instances would have problems managing the sincedb, or if they would try to read the same files... Any suggestions, ideas, or recommendations? Thanks!!

If you configure two Logstash instances to read the same files, then they will both read the same files. It would double your workload, not your capacity.
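If you do want a second instance, each one needs a disjoint set of input files and its own sincedb. A minimal sketch of one way to split them (the paths and the a-m/n-z split are just examples, not from your setup):

```
# Instance 1: only reads CSV files whose names start with a-m (example split)
input {
  file {
    path => "/usr/share/logstash/data/[a-m]*.csv"
    mode => "read"    # consume whole files instead of tailing them
    # note: in read mode the default file_completed_action deletes files after reading
    sincedb_path => "/usr/share/logstash/data/sincedb_1"
  }
}

# Instance 2: identical except path => "/usr/share/logstash/data/[n-z0-9]*.csv"
# and sincedb_path => "/usr/share/logstash/data/sincedb_2"
```

With disjoint globs and separate sincedb files, neither instance ever sees the other's files, so nothing is read or ingested twice.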


Thanks for your reply. I was in doubt whether Logstash was able to work like this. I'm also working with NiFi, so I used a DistributeLoad processor, created two pipelines looking at different directories, and now I have unique files in those directories. It's working fine.
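In case it helps anyone else, the Logstash side of that split can be wired up as two pipelines in pipelines.yml, each watching one of the directories with its own sincedb (the pipeline ids, directory names, and paths below are just examples):

```yaml
# pipelines.yml: one pipeline per input directory
- pipeline.id: csv_dir_a
  path.config: "/usr/share/logstash/pipeline/csv_dir_a.conf"
- pipeline.id: csv_dir_b
  path.config: "/usr/share/logstash/pipeline/csv_dir_b.conf"
```

```
# csv_dir_a.conf (csv_dir_b.conf is the same apart from the directory and sincedb path)
input {
  file {
    path => "/data/csv_dir_a/*.csv"
    mode => "read"
    sincedb_path => "/usr/share/logstash/data/sincedb_dir_a"
  }
}
```

Each pipeline gets its own set of worker threads, so the two directories are processed in parallel inside a single Logstash process.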
