Kibana 5.1.2/ES 5.1.2 - Index patterns with more than 1000 fields

I have upgraded to Elasticsearch and Kibana 5.1.2. I ran the migration checker and index migration tool to ensure I addressed all incompatibilities before upgrading. I also read through the breaking changes for both products before progressing.

The migration tool informed me that Elasticsearch 5 cannot, by default, have indices with more than 1000 fields. Before upgrading, I ensured that none of my indices exceeded that limit.

After upgrading to Elasticsearch 5 the cluster is operational and I'm indexing data successfully. However, some of my index patterns in Kibana span multiple indices and have an aggregate field total between 1000 and 2000 fields.

The documentation for Elasticsearch 5 seems to indicate that it's possible to raise the limit and operate with more than 1000 fields per index:

index.mapping.total_fields.limit
The maximum number of fields in an index. The default value is 1000

I've since raised the field limit using the following command on all my indices:

curl -XPUT 'http://host:9200/_all/_settings?preserve_existing=true' -d '{ "index.mapping.total_fields.limit" : "5000" }'
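
To double-check that the new limit actually took effect, the setting can be read back per index (host is just the placeholder from above). Note that, as I understand it, preserve_existing=true only applies the setting to indices that don't already have an explicit value for it, so any index that does will keep its old limit:

curl -XGET 'http://host:9200/_all/_settings/index.mapping.total_fields.limit?pretty'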

However, when I attempt to view an index pattern that hits all our logging indices, I receive the following error:

commons.bundle.js?v=14588:37 POST https://hosturl/elasticsearch/.kibana/index-pattern/indexprefix-* 413 ()
commons.bundle.js?v=14588:38 Error: Request to Elasticsearch failed: "Request Entity Too Large"

What is frustrating is that many of our saved searches/visualisations that pull data from a range of indices no longer work, because Kibana is unable to return index patterns with more than 1000 fields.

I have also created a new Kibana index and attempted to recreate the index pattern; however, the same error occurs even with a blank Kibana state. I'm quite certain the field count is the issue, but I can't find any additional settings to make this take effect.

Any ideas? :slight_smile:

The 413 (PAYLOAD TOO LARGE) response is actually most likely coming from Elasticsearch. Kibana is just surfacing the error it gets back.

It happens as part of a request Kibana is making to Elasticsearch, so it would be useful to see what that request looks like. If you open your browser's debugger, you can inspect the request in the network tab. That might indicate why that request is so large... most likely it's related to the field count, but I couldn't tell you offhand why the field count would make a request that large.

Thanks @Joe_Fleming, the 413 did sound like an Elasticsearch error to me, but the actual problem was in the prior GET's response.

Looking at the network requests in the Chrome console, Kibana first does a GET for the pattern's field capabilities:

curl 'http://localhost:5601/api/kibana/px-*/field_capabilities'

Looking at the response for this GET, I noticed a bunch of fields in the JSON blob that had heaps of \u0000 ASCII null characters. The next POST uses part of the response from the field-capabilities GET as its payload.
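
As a quick sanity check (the px-* pattern and localhost:5601 are just the examples from above), the escaped null characters can be counted straight from that response:

curl -s 'http://localhost:5601/api/kibana/px-*/field_capabilities' | grep -o '\\u0000' | wc -l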

So the problem seems to be an index with a munged field mapping. The problematic field was a timestamp field:

blackboard.timestamp

Instead, the name of the field was:

/blackboard.(\\u0000){7422}timestamp/

That's right: 7422 null characters between the start and the end of the field name. This occurred several times in the payload, so the payload ballooned to over 2 MB, which was enough to tie Elasticsearch's knickers in a knot.

I managed to locate the index responsible by dumping the index mappings for the blackboard-related indices. In the end I closed that index, and after deleting the broken index patterns in Kibana with curl -XDELETE, I was able to add a new index pattern without the 413 error.

curl -XDELETE 'http://localhost:9200/.kibana/index-pattern/px-*'
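
In case it helps anyone else, the hunt went roughly along these lines; the blackboard-* pattern and the broken-blackboard-index name are placeholders for my actual index names:

# count escaped null characters in each blackboard index's mapping
for idx in $(curl -s 'http://localhost:9200/_cat/indices/blackboard-*?h=index'); do
  echo "$idx: $(curl -s "http://localhost:9200/$idx/_mapping" | grep -c '\\u0000')"
done

# close the index with the munged mapping
curl -XPOST 'http://localhost:9200/broken-blackboard-index/_close'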

I have no surefire way to tell whether this field mapping issue was present before the upgrade, or whether the upgrade itself munged the field mapping for this particular index. Regardless, I'm now a happy camper.

The index pattern has 1238 fields and operates perfectly fine in Kibana now.


Wow, what a weird condition. Thanks for coming back here to update the thread. I wonder how that field name got turned into that in the first place. I'm glad you tracked it down at least.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.