I am relatively new to Elasticsearch and am building an ES cluster for customers. My current design gives each customer their own index, which they write to and read from.
The problem is that different customers have different amounts of data. For example, one might have 200 documents, another 45k, and another 20+ million. Initially, I plan to manually create the right number of shards for each index based on the storage required.
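For context, my initial manual sizing follows the commonly cited guidance of keeping each shard roughly in the tens-of-gigabytes range. A rough sketch of the arithmetic (the 30 GB target and the per-customer storage figures are my assumptions, not fixed rules):

```python
import math

TARGET_SHARD_GB = 30  # assumed target size per primary shard; tune per workload


def estimate_primary_shards(index_size_gb: float) -> int:
    """Return a primary shard count aiming for ~TARGET_SHARD_GB per shard."""
    return max(1, math.ceil(index_size_gb / TARGET_SHARD_GB))


# Hypothetical storage footprints for the example customers:
print(estimate_primary_shards(0.01))  # ~200 docs: 1 shard
print(estimate_primary_shards(1.5))   # ~45k docs: 1 shard
print(estimate_primary_shards(120))   # 20M+ docs at an assumed ~120 GB: 4 shards
```

The `max(1, ...)` floor matters because even a tiny index needs at least one primary shard.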
However, in the long run I would like to automate the shard count. My idea is to use ILM to split an index's shards once a certain shard size is reached. Does anyone have suggestions on this?
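To make the ILM idea concrete: as I understand it, ILM caps shard size through the rollover action, which starts writing to a new backing index once a threshold is crossed, rather than splitting the existing index in place. A minimal policy sketch of that approach, assuming the `max_primary_shard_size` rollover condition (available since 7.13); the policy name is just a placeholder:

```json
PUT _ilm/policy/per-customer-rollover
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb"
          }
        }
      }
    }
  }
}
```

Each customer's index would then need to be a data stream or a write alias pointing at the latest backing index for rollover to work.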
An alternate method I considered was using a single index for all customers' documents, but then heavy writes from one customer slow down reads for the others.