Standard rules for using ELK to handle big data?

What are the standard rules for using the ELK stack to handle big data (millions of documents)?

Millions of documents may well still fit on a single machine.

Billions typically require multiple machines, splitting a single index into shards.
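
The shard count is fixed when an index is created, so it has to be sized up front. Here's a minimal sketch using the official elasticsearch-py 8.x client; the host, index name, and shard count are illustrative assumptions, not recommendations:

```python
# Minimal sketch, assuming the elasticsearch-py 8.x client and a local
# cluster; the index name and shard count are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# number_of_shards is fixed at index creation, so choose it up front;
# each primary shard can be placed on a different machine.
es.indices.create(
    index="logs",
    settings={
        "number_of_shards": 5,    # spread across up to 5 data nodes
        "number_of_replicas": 1,  # one copy of each shard for resilience
    },
)
```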
A never-ending stream of new content is managed with multiple time-based indices, perhaps one per day or week, deleting old indices once they pass the retention period. The retention period is whatever is legally required, or wherever you decide the crossover lies between potentially interesting content and the economic viability of keeping it around.
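
A rough sketch of that rotation with the same Python client; the `logs-YYYY.MM.DD` naming scheme and the 30-day retention are assumptions for illustration:

```python
# Rough sketch of time-based indices plus retention cleanup, assuming the
# elasticsearch-py 8.x client; names and the 30-day window are illustrative.
from datetime import datetime, timedelta, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
RETENTION_DAYS = 30

# New documents go into a daily index, e.g. logs-2024.05.01.
now = datetime.now(timezone.utc)
es.index(
    index=f"logs-{now:%Y.%m.%d}",
    document={"@timestamp": now.isoformat(), "message": "example event"},
)

# Delete whole daily indices once they fall outside the retention window;
# dropping an index is far cheaper than deleting individual documents.
cutoff = now - timedelta(days=RETENTION_DAYS)
for name in es.indices.get(index="logs-*").body:
    try:
        day = datetime.strptime(name, "logs-%Y.%m.%d").replace(tzinfo=timezone.utc)
    except ValueError:
        continue  # ignore indices that don't match the daily pattern
    if day < cutoff:
        es.indices.delete(index=name)
```

In production this rotation is usually handled by Elasticsearch's built-in index lifecycle management (ILM) or the older Curator tool rather than a hand-rolled script.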


Thanks, man!

