We have an Elasticsearch cluster that indexes events flowing through our system for debugging. These events have widely varying formats, so they generate a lot of distinct fields; Kibana maps about 2,000 fields across the indexes. We don't configure the mappings manually; we just let Elasticsearch generate them dynamically from the data.
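To illustrate (with a made-up index and field name): the first document containing a new string field that matches the default date-detection formats is enough for Elasticsearch to dynamically map that field as a date:

PUT tracer-2018-09-07/event/1
{
  "deploy_time": "2018-09-07T12:34:56Z"
}

After that one document, deploy_time is mapped as a date field, which is how our date field count keeps creeping up.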
Performance has never been a problem. Type conflicts are pretty rare and haven't been problematic enough to warrant any action.
Today I refreshed the field mappings, and every search query is now failing with the error below. It looks like the indexes have reached 101 distinct date-type fields, and Kibana automatically requests every date field as a docvalue field in every single request.
These are "Discover" requests, and we never sort or aggregate on any of these fields. Is there a way to keep Kibana from requesting them as docvalue fields?
If not, how can we get Kibana working again? By updating index.max_docvalue_fields_search on every index?
{
  "responses": [
    {
      "took": 2480,
      "timed_out": false,
      "_shards": {
        "total": 5695,
        "successful": 5600,
        "skipped": 5600,
        "failed": 95,
        "failures": [
          {
            "shard": 0,
            "index": "tracer--2018-09-07",
            "node": "rMepPe8BS1m2ILlUDDQFmg",
            "reason": {
              "type": "illegal_argument_exception",
              "reason": "Trying to retrieve too many docvalue_fields. Must be less than or equal to: [100] but was [101]. This limit can be set by changing the [index.max_docvalue_fields_search] index level setting."
            }
          }
        ]
      },
      "hits": {
        "total": 0,
        "max_score": 0.0,
        "hits": []
      },
      "status": 200
    }
  ]
}
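If there's no way to change the Kibana behavior, the fallback I'm considering is raising the limit on all existing indexes and baking it into a template for future ones (the index pattern and limit value here are just examples):

PUT tracer-*/_settings
{
  "index.max_docvalue_fields_search": 200
}

PUT _template/tracer_docvalue_limit
{
  "index_patterns": ["tracer-*"],
  "settings": {
    "index.max_docvalue_fields_search": 200
  }
}

That feels like treating the symptom rather than the cause, though, so I'd rather stop Kibana from requesting these fields in the first place if that's possible.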