Kibana won't refresh the field cache - 50k+ fields

I wanted to make some edits to our visualisations today and noticed that our index (this is not Logstash data, but live online game analytics) has grown to 50k+ fields. This is pretty normal for us, since we expand all the user data dynamically, and I use dynamic mappings on those fields so that they don't get indexed. I just need to see that part of the data, not search it :slight_smile:
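For context, the dynamic mapping looks roughly like this. The index, type and field names here are made up for illustration, and on ES 5.x+ you'd write `"index": false` instead of `"index": "no"`:

```
# Sketch of the kind of dynamic template we use; names are placeholders.
# Anything arriving under userdata.* stays in _source and shows up in the
# mapping, but is not indexed.
curl -XPUT 'http://localhost:9200/game-analytics-2016.06.01' -d '{
  "mappings": {
    "event": {
      "dynamic_templates": [
        {
          "userdata_not_indexed": {
            "path_match": "userdata.*",
            "mapping": { "type": "string", "index": "no" }
          }
        }
      ]
    }
  }
}'
```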

The rest of the core fields, around 2,000 of them, worked just fine and I do visualise based on those. But today, when we added a field there, I couldn't refresh the field list any more. I tried deleting the index pattern, closing all but the latest timed index, increasing the timeout from 30 s to 90 s, and bumping the maximum payload bytes by a few orders of magnitude... Kibana always just returns an empty field list.
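In case it matters, these are the settings I bumped, using the kibana.yml key names from recent Kibana versions; older 4.x releases spell them a bit differently, so treat this as a sketch:

```yaml
# kibana.yml
# How long Kibana waits for Elasticsearch responses, in ms (default 30000)
elasticsearch.requestTimeout: 90000
# Maximum size of incoming request payloads, in bytes (default 1048576)
server.maxPayloadBytes: 1073741824
```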

This wasn't a problem with the 1.x series, btw :slight_smile:

Elasticsearch itself is fine: our other analytics tooling that pulls data directly from ES is doing just fine, and the ES servers are almost idling. But Kibana... it's running in a container, so I was easily able to confirm that it's not running out of memory.

When I pulled the current mapping via curl, it came out at around 1.2 MB.
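That's just the raw size of the mapping response, measured along these lines (the index name is a placeholder):

```bash
# Dump the full mapping and check its size; index name is a placeholder
curl -s 'http://localhost:9200/game-analytics-*/_mapping' > mapping.json
wc -c mapping.json   # ~1.2 MB in our case
```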

Is there any way to 'force-feed' the .kibana index with the current mappings so that I can continue?

We're also looking into trimming some of the data to get rid of 20k+ of those fields, but that won't help me today :frowning:

Are you getting any errors in the JavaScript console?

Could be related to https://github.com/elastic/kibana/issues/1540.

Could indeed be that. I was able to sneak a refresh through on another, identical cluster (which hadn't received its full field list yet) using Safari (Chrome wouldn't work). After that I dumped the index pattern from its .kibana, renamed one of the mappings to match what I actually needed to visualise, and put it back into the production .kibana. That works for now.
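Roughly what I did, in case it helps someone else. The hostnames and the index-pattern id are placeholders, and the layout of the documents in .kibana differs between Kibana versions, so treat this as a sketch rather than a recipe:

```bash
# 1. On the sister cluster, where the Safari refresh worked, pull the stored
#    index-pattern document out of the .kibana index
curl -s 'http://other-cluster:9200/.kibana/index-pattern/game-analytics-*' > pattern.json

# 2. Keep only the _source and adjust the title/fields to match the
#    production index pattern (I did this part by hand in an editor)
jq '._source' pattern.json > pattern_source.json

# 3. Push the edited document into the production .kibana under the same id
curl -XPUT 'http://prod-cluster:9200/.kibana/index-pattern/game-analytics-*' \
     -d @pattern_source.json
```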

Looks like it's causing problems for other people too (according to the GitHub issue); using browser memory to handle big mappings seems a bit weird :slight_smile: