Assuming standard hardware, say a 16-core CPU and 64 GB of RAM, can we come up with figures for the maximum number of indices a node can gracefully support?
Assume every index has 5 shards.
Size and document count will vary per index, but assume roughly 15M documents per index, for a total size of ~30-50 GB each.
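For context, here is the kind of back-of-envelope estimate I have in mind. It assumes Elastic's commonly cited rule of thumb of staying under ~20 shards per GB of JVM heap, with heap capped near the ~32 GB compressed-oops limit; the exact figures are assumptions, and this only bounds shard count, not data volume or query load:

```python
# Rough shard-count ceiling for a single node (a sketch, not a guarantee).
# Assumptions: heap = min(RAM/2, ~31 GB); rule of thumb of <= ~20 shards
# per GB of heap, as suggested in Elastic's sizing guidance.

ram_gb = 64
heap_gb = min(ram_gb // 2, 31)        # ~31 GB heap on a 64 GB node
max_shards = heap_gb * 20             # rule-of-thumb shard ceiling
shards_per_index = 5                  # per the assumption above
max_indices = max_shards // shards_per_index

print(f"heap: {heap_gb} GB, shard ceiling: {max_shards}, "
      f"index ceiling at {shards_per_index} shards each: {max_indices}")
```

Under these assumptions the node tops out around 620 shards, or roughly 124 five-shard indices, before shard overhead alone becomes a concern.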
Is it safe to assume that a smaller, consolidated set of indices holding the same aggregate data as a large number of indices would perform better? Indexing performance seems likely to benefit, but does the same apply to real-time search performance?
What are the limiting factors that degrade performance when an unreasonable number of indices is created per node? I believe index management is a significant overhead for an Elasticsearch node.