New index for every log file transferred to Elasticsearch

We use Elasticsearch for EMC Networker log analysis, so we transfer Networker server daemon logs to ES. These log files are generally huge, so for better analysis we need a new index for every log file we transfer to Elasticsearch.
How can we achieve this?

How large are the files? How many are generated per day?

Please be aware that having lots of small indices and shards is inefficient and can cause performance problems.

File sizes could vary from 500 MB to 40+ GB, with mostly 4 to 5 logs every day.

I am fully aware of the possible performance issues.
As of now our focus is to analyze the micro details of each log file, and a single index shared by multiple log files would create confusion, hence we want a per-file solution.
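For reference, a minimal sketch of the per-file approach using the Python `elasticsearch` bulk-helper action format; the `networker-` prefix and the name-mangling scheme are assumptions, not Networker or Elasticsearch conventions:

```python
from pathlib import Path

def index_name_for(path):
    # Derive a per-file index name. Elasticsearch index names must be
    # lowercase and may not contain dots at the start, so normalize the
    # file name and replace dots. The "networker-" prefix is an assumption.
    return "networker-" + Path(path).name.lower().replace(".", "-")

def bulk_actions(path, lines):
    # Build one action per log line in the shape expected by
    # elasticsearch.helpers.bulk; every document targets the index
    # derived from the source file name.
    index = index_name_for(path)
    return [
        {"_index": index, "_source": {"message": line.rstrip("\n")}}
        for line in lines
    ]
```

Passing the result to `helpers.bulk(es, bulk_actions(path, open(path)))` would then create (or append to) a distinct index per file, e.g. `networker-daemon-log` for a file named `daemon.log`.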

Why not just extract the file name into a separate field and let the user filter on this?
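As a sketch of that alternative (the field name `log_file` and the index name `networker-logs` are made-up examples, not Networker conventions):

```python
from pathlib import Path

def actions_with_filename(path, lines, index="networker-logs"):
    # One shared index; each document carries the source file name
    # so users can filter on it instead of switching indices.
    name = Path(path).name
    return [
        {"_index": index, "_source": {"log_file": name, "message": line.rstrip("\n")}}
        for line in lines
    ]

# Filtering one file's entries is then a plain term query on that field,
# e.g. {"query": {"term": {"log_file": "daemon.log"}}}.
```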

What does your ingest architecture look like?
