Hi
My use case requires me to index small volumes of varied data (fewer than 500 documents, each with about 20 fields) as separate indices, query them, and then immediately destroy them or delete their data.
The following steps occur back-to-back:
Step 1: Create an index/mapping with a dynamic set of 10-30 fields (as a one-time setup or every time; either works for me).
Step 2: Index a few hundred documents or fewer.
Step 3: Run some ad hoc queries and aggregations after an on-demand refresh.
Step 4 (optional): Destroy the index for later recreation, or delete the data.
All of this happens within a minute or two (a rough sketch of one such cycle is below).
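For concreteness, here is a minimal sketch of one create/index/query/destroy cycle, assuming the official Python client (8.x keyword-argument style). The index name, field names, document count, and the single-shard/zero-replica settings are just my placeholders, not a recommendation:

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

index = "adhoc-batch-0001"  # hypothetical per-batch index name

# Step 1: create the index with an explicit small mapping
es.indices.create(
    index=index,
    settings={"number_of_shards": 1, "number_of_replicas": 0},  # placeholder values
    mappings={
        "properties": {
            "field_a": {"type": "keyword"},
            "field_b": {"type": "double"},
            # ...10-30 fields in practice
        },
    },
)

# Step 2: bulk-index a few hundred documents
docs = (
    {"_index": index, "_source": {"field_a": str(i), "field_b": i * 1.5}}
    for i in range(300)
)
helpers.bulk(es, docs)

# Step 3: on-demand refresh so the documents are searchable immediately,
# then run an ad hoc aggregation
es.indices.refresh(index=index)
resp = es.search(
    index=index,
    size=0,
    aggs={"avg_b": {"avg": {"field": "field_b"}}},
)
print(resp["aggregations"]["avg_b"]["value"])

# Step 4 (optional): destroy the index (or delete_by_query to keep the mapping)
es.indices.delete(index=index)
```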
My question: I need to run this whole four-step cycle for hundreds of indices/mappings (created dynamically) over the course of a day, with at least 10 of them running in parallel at any time. What factors should I consider regarding sharding, resource allocation, and index creation?
Any input you can provide is truly appreciated.