Large number of indexes

In my multi-tenant application, every tenant can create a custom data structure composed of different field types such as strings, bools, ints, etc. I am planning to create an index per data structure, so there could potentially be 20-30K user-defined indexes, each containing 1K-100K documents.
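For example, roughly this (a sketch using the Python client; the index name and fields are made up):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One index per user-defined structure, e.g. tenant 42's "orders" type.
# The mapping would be generated from whatever fields the tenant configured.
es.indices.create(
    index="tenant42-orders",
    body={
        "settings": {"number_of_shards": 1},
        "mappings": {
            "properties": {
                "order_no": {"type": "keyword"},
                "amount": {"type": "integer"},
                "paid": {"type": "boolean"},
            }
        },
    },
)
```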

Do you think this will work, or should I consider doing something different?

Each shard in Elasticsearch is a Lucene instance, which uses a certain amount of file handles, memory, and CPU cycles behind the scenes. Shards are therefore not free, and having a very large number of small indices is generally a bad idea, as it does not scale well. Discussions on this topic can be found in this previous question as well as in this blog post.

Maybe fewer indices and document routing would be better. Lots of shards, lots of problems.
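For illustration, a minimal sketch of the shared-index-plus-routing approach (using the Python client; the index name and tenant_id field are made up):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# All tenants share one index; the tenant id doubles as the routing key,
# so each tenant's documents land on a single shard.
es.index(
    index="tenant-data",
    id="order-1",
    routing="tenant42",
    body={"tenant_id": "tenant42", "amount": 100, "paid": True},
)

# Searches pass the same routing value and filter on the tenant id,
# hitting one shard instead of fanning out to all of them.
es.search(
    index="tenant-data",
    routing="tenant42",
    body={"query": {"bool": {"filter": [{"term": {"tenant_id": "tenant42"}}]}}},
)
```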

Otis

Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Elasticsearch Consulting & Support * http://sematext.com/

Thanks for the responses.

Do you think it is possible to achieve this type of customization using "Dynamic Mapping"? I can create a dynamic mapping that matches based on the custom field name.
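Something along these lines (a rough sketch; the suffix convention is just an example):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Dynamic templates map custom fields by naming convention instead of
# pre-declaring every field: "*_i" becomes an integer, "*_b" a boolean, etc.
es.indices.create(
    index="tenant-data",
    body={
        "mappings": {
            "dynamic_templates": [
                {"ints": {"match": "*_i", "mapping": {"type": "integer"}}},
                {"bools": {"match": "*_b", "mapping": {"type": "boolean"}}},
                {"strings": {"match": "*_s", "mapping": {"type": "keyword"}}},
            ]
        }
    },
)

# A document using the convention; "budget_i" gets mapped as an integer
# the first time it is seen, with no per-tenant mapping update.
es.index(index="tenant-data", id="1", body={"name_s": "Q3 plan", "budget_i": 5000})
```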

Another option is to create one big index that contains every possible custom field type * 5. This solution appeals because of its simplicity, but then it would limit users to a maximum number of fields in their data structure.
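A sketch of that fixed-slot layout (the slot names and the five-per-type limit are just an illustration):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One shared index with a fixed pool of typed slots; each tenant assigns
# its custom fields to free slots (e.g. "order total" -> int_1).
properties = {}
for i in range(1, 6):  # the "* 5" part: five slots per field type
    properties[f"string_{i}"] = {"type": "keyword"}
    properties[f"int_{i}"] = {"type": "integer"}
    properties[f"bool_{i}"] = {"type": "boolean"}

es.indices.create(index="tenant-data", body={"mappings": {"properties": properties}})
```

A per-tenant lookup would then translate slot names back to the user's field labels for display.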

The end goal is to make this user-defined and user-populated data easily readable, searchable, sortable, etc.