Best practices for documents with a large field count, with an eye to performance

Hi! I'm pretty new to the whole ELK stack.

My data consists of around 10,600 JSON documents, each document having between 1,000 and 50,000 properties/fields (here is an example of a small document).
A large number of the fields need to be searchable (all the "*#text" and "*al*" fields).
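
In case it matters, this is roughly the kind of mapping I had in mind, so that only the fields I actually search get indexed. It's just a rough sketch with the 8.x Python client; the index name, the patterns, and the "leave everything else unindexed" idea are my own assumptions:

```python
# Rough sketch, assuming the 8.x Python client (pip install elasticsearch)
# and a local single-node cluster. Only fields matching the patterns I
# actually search become searchable text; everything else stays in _source
# but is not indexed.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumption: local test cluster

es.indices.create(
    index="doc-00001",  # hypothetical name for one document's index
    mappings={
        "dynamic_templates": [
            {
                # fields like "title#text" -> searchable full text
                "text_fields": {
                    "match": "*#text",
                    "mapping": {"type": "text"},
                }
            },
            {
                # the "*al*" fields -> also searchable
                "al_fields": {
                    "match": "*al*",
                    "mapping": {"type": "text"},
                }
            },
            {
                # everything else -> keep the value, but don't index it
                "other_fields": {
                    "match": "*",
                    "mapping": {"type": "keyword", "index": False},
                }
            },
        ]
    },
)
```

The idea would be that the last template catches whatever the first two don't, so the bulk of the 50,000 fields per document never gets indexed.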

Because the documents have different structures/types (and mapping types are being removed), I will use 1 index per document, as suggested here.

But I'm running into the issue that I have too many shards open in the cluster.
Should I use fewer shards per index/document (currently 5 shards per index)?
Or should I add more clusters?
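
For the shard question, what I was considering is simply creating each per-document index with a single primary shard instead of 5, something like this (same client assumption as above; the replica count of 0 is just for my local testing):

```python
# Sketch: one primary shard and no replicas per per-document index,
# instead of the 5 shards per index I'm using now.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="doc-00001",  # hypothetical naming scheme, repeated per document
    settings={
        "index": {
            "number_of_shards": 1,    # 1 shard instead of 5
            "number_of_replicas": 0,  # no replicas while testing
        }
    },
)
```

Even then, one index per document would still mean roughly 10,600 shards for 10,600 indices, which is part of why I'm wondering if the approach itself is the problem.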

I feel like I'm missing something; my data is not that large (in GB). I see examples of 50 GB. My complete dataset is only around 2 GB, but performance with 50,000 fields is very poor in my initial testing.
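
For reference, this is roughly how I've been checking how many shards are open and how many fields each index ends up with (Python client again; the "doc-*" index pattern is just my own naming):

```python
# Sketch: count open shards and mapped fields per index.
# Same client/cluster assumptions as above.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

shards = es.cat.shards(format="json")
print("total shards:", len(shards))

def count_fields(properties: dict) -> int:
    """Recursively count leaf fields in a mapping's 'properties' block."""
    total = 0
    for field in properties.values():
        if "properties" in field:  # nested object -> recurse
            total += count_fields(field["properties"])
        else:
            total += 1
    return total

mappings = es.indices.get_mapping(index="doc-*")  # hypothetical index pattern
for name, body in mappings.items():
    print(name, count_fields(body["mappings"].get("properties", {})), "fields")
```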

How would one store this kind of dataset?
