We have a situation where we push lots of JSON data into Elasticsearch via Logstash. The JSON varies a lot, so we end up with many distinct field names and hit the limit of 1000 fields per index.
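When that happens, Elasticsearch rejects the documents with an error along these lines (index name is just an example):

```
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [logstash-2017.01.01] has been exceeded
```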
As far as I understand, the limit was introduced to prevent mapping explosion (lots of resources consumed by large mapping metadata), so I don't want to raise it. If I did, I'm pretty sure I'd have to raise it again every three months or so, because the software that writes into Logstash tends to keep introducing new field names.
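For reference, this is the setting I would otherwise have to keep bumping (index name hypothetical):

```
PUT logstash-2017.01.01/_settings
{
  "index.mapping.total_fields.limit": 2000
}
```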
I already asked for hints about a solution in Using "key/value with nested fields" with logstash. But now I wonder whether splitting long field names into nested fields would be of any benefit at all. The structure of the field names would allow for very few parent fields and lots of sub-fields. A sketch of the kind of transformation I mean follows below.
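To make the question more concrete, here is roughly what I have in mind (field and array names are just placeholders, not our actual schema): a ruby filter that folds the arbitrary top-level fields into one nested key/value array, so the index would only ever need two leaf fields (`kv.key` and `kv.value`) no matter how many distinct field names arrive:

```
filter {
  ruby {
    code => '
      # Fold dynamic top-level fields into a single key/value array,
      # so the mapping needs "kv.key" and "kv.value" instead of one
      # mapping entry per distinct field name.
      kv = []
      event.to_hash.each do |name, value|
        next if name.start_with?("@")   # keep @timestamp, @version etc.
        kv << { "key" => name, "value" => value.to_s }
        event.remove(name)
      end
      event.set("kv", kv)
    '
  }
}
```

Values are converted to strings so that `kv.value` stays a single type, and on the Elasticsearch side `kv` would be mapped as a `nested` type so each key/value pair remains queryable as a unit.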
Thanks in advance!