Split huge CSV file into smaller ones - fastest way to index using Logstash pipelines.yml or Filebeat?

Hi there,
I just have a quick question about how to handle the following case:

I have some very large CSV files which I am considering splitting into smaller files in the same folder, because indexing a single big CSV file can take 2 to 4 days.

What I want to know is the best practice:

Let's say:
1 big file = 4 days (no way)
OK, let's split it into 10 files under the path => "C:/Folder/Elastic/7.5.0/*.csv"
All using the same config file, so they all end up in the same index (a sketch of such a config is below).
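
To make it concrete, this is roughly what I have in mind as the shared config; the column names, separator, index name and sincedb path are just placeholders:

```
# csv-import.conf - minimal sketch of the shared pipeline config
input {
  file {
    path => "C:/Folder/Elastic/7.5.0/*.csv"   # picks up all of the split CSV files
    start_position => "beginning"
    sincedb_path => "NUL"                     # Windows; use /dev/null on Linux
  }
}

filter {
  csv {
    separator => ","
    columns => ["col1", "col2", "col3"]       # placeholder column names
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "my-csv-index"                   # placeholder index name
  }
}
```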

What should I use to make this fast and easy:
a: Logstash pipelines.yml?
b: Filebeat? (see the Filebeat sketch after this list)
c: 10 Logstash instances running in parallel?
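
For option b, this is roughly the kind of Filebeat input I am thinking of (the paths and the Logstash host are just placeholders); the CSV parsing itself would still have to happen in Logstash or in an ingest pipeline:

```
# filebeat.yml - minimal sketch
filebeat.inputs:
  - type: log
    paths:
      - "C:/Folder/Elastic/7.5.0/*.csv"

output.logstash:
  hosts: ["localhost:5044"]   # CSV parsing still done downstream
```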

I'm familiar with Logstash and Filebeat and how to set them up, but I have no idea whether a Logstash pipeline can do this job just as well or faster, or even how to set up the Logstash pipelines.yml!?
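
From what I understand, a pipelines.yml entry pointing at that shared config would look something like this; the pipeline id, path, and the worker/batch settings are just guesses on my part:

```
# pipelines.yml - minimal sketch
- pipeline.id: csv-import
  path.config: "C:/Folder/Elastic/7.5.0/csv-import.conf"
  pipeline.workers: 4        # parallel filter/output workers in one Logstash instance
  pipeline.batch.size: 1000
```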

Your suggestions are welcome.
Regards
