I have only 6 fields defined in my mappings. How is it possible to exceed the 1000-field limit?

I am not creating fields on the fly, unless there is a new bug; I've been running ES 6.1 over lots of "rows" of data, when suddenly I started seeing an error saying I had exceeded the 1000-field limit on that index.

I am interested in hints on where to look; with just a half dozen possible fields in use, it's not a case of needing to increase the limit. Or is it?
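For the record, the limit itself is adjustable per index via a dynamic setting. A minimal sketch, assuming the index is named `topics` (though with only six mapped fields, raising it would likely just mask the real problem):

```
PUT /topics/_settings
{
  "index.mapping.total_fields.limit": 2000
}
```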

TIA

What does your mapping for that index look like?

Here's a gist: mapping

The idea is that for any given document, identified by "lox", there can be collections of labels and details keyed by a language code; the plan is to add other language codes later.

Looks odd. Can you show the full error?

Here is the code for my client:

Here is the error log:
2018-05-03 15:52:14 ERROR LoggingPlatform:45 - ProviderClient.put: Elasticsearch exception [type=illegal_argument_exception, reason=Limit of total fields [1000] in index [topics] has been exceeded]
ElasticsearchStatusException[Elasticsearch exception [type=illegal_argument_exception, reason=Limit of total fields [1000] in index [topics] has been exceeded]]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177)
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:573)
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:549)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:456)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:429)
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:312)
at org.topicquests.es.ProviderClient.put(ProviderClient.java:146)
at org.topicquests.ks.tm.DataProvider.addLabel(DataProvider.java:744)
at org.topicquests.ks.tm.Proxy.addLabel(Proxy.java:487)
at org.topicquests.ks.tm.ProxyModel.newNode(ProxyModel.java:88)
at org.topicquests.ks.tm.ProxyModel.newInstanceNode(ProxyModel.java:156)

I think it is worth noting that Kibana shows no fields other than those I specified, and specifically, those in use.

@Christian_Dahlqvist after emptying the index, the bug cropped up again rather quickly. It seems worth showing what was being indexed at the failure:

ProviderClient.put 76d740ca-1bf5-4a04-bb26-a7fb1449d120InstanceRelationTypeClassType topics {"lox":"76d740ca-1bf5-4a04-bb26-a7fb1449d120InstanceRelationTypeClassType","details":{"en":["Relate existing nodes:<br/>SUE<br/> with</br> Class type<br/>with the relation: InstanceRelationType"]},"label":{"en":["InstanceRelationType"]}}

I noticed an enormous amount of log lines between that line and the exception. Here's a gist of that entire log: https://gist.github.com/KnowledgeGarden/3c24a01e5805404426327d974918ff37

@Christian_Dahlqvist Just curious. I am running these exercises on 6.2.2; I raised the field limit to 2000, which lets me run longer -- I have yet to break it -- but that feels like it is masking something else. That something else could be an error in my code, or it could be Elasticsearch version related, though I have no way of knowing which. I plan to try running against 6.2.4 without the increased limit.

@Christian_Dahlqvist I can report that 6.2.4 did not make the exception (ElasticsearchStatusException[Elasticsearch exception [type=illegal_argument_exception, reason=Limit of total fields [1000] in index [topics] has been exceeded]]) go away. Kibana shows that a total of 7 fields are in play and available. From an issue at GitHub, I landed at Mapping | Elasticsearch Guide [5.0] | Elastic, which says:

The following settings allow you to limit the number of field mappings that can be created manually or dynamically, in order to prevent bad documents from causing a mapping explosion:
index.mapping.total_fields.limit

That raises a question: what are mapping explosions, and how are they caused?
I have a fixed (small) number of fields. What in a client program can cause an explosion in mappings -- which I interpret to mean that some of my fields are not in the mapping and so are being created dynamically?
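To illustrate the idea of a mapping explosion: Elasticsearch's dynamic mapping creates one mapped field per distinct JSON path it sees. The sketch below (hypothetical, not Elasticsearch code) mimics that path-counting to show how a value that accidentally ends up used as a JSON *key* (for example a per-document id) adds new fields with every document, while the same data with the id as a field *value* maps to a constant number of fields:

```java
import java.util.*;

// Sketch of how dynamic mapping derives one mapped field per distinct
// JSON path. Keys that vary per document (e.g. ids) make the mapping
// grow without bound; values do not.
public class MappingExplosionDemo {
    static void collectPaths(String prefix, Map<String, ?> doc, Set<String> fields) {
        for (Map.Entry<String, ?> e : doc.entrySet()) {
            String path = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                collectPaths(path, (Map<String, ?>) e.getValue(), fields);
            } else {
                fields.add(path);
            }
        }
    }

    public static void main(String[] args) {
        // Well-formed documents: the id is the *value* of the "lox" field.
        Set<String> mapped = new TreeSet<>();
        for (int i = 0; i < 100; i++) {
            Map<String, Object> doc = new HashMap<>();
            doc.put("lox", "id-" + i);
            doc.put("label", Map.of("en", "some label"));
            collectPaths("", doc, mapped);
        }
        System.out.println("fields with id as value: " + mapped.size()); // 2

        // Malformed documents: the id accidentally used as a *key*.
        mapped.clear();
        for (int i = 0; i < 100; i++) {
            collectPaths("", Map.of("id-" + i, Map.of("label", "some label")), mapped);
        }
        System.out.println("fields with id as key: " + mapped.size()); // 100
    }
}
```

With 100 documents, the first shape maps 2 fields ("lox" and "label.en") no matter how many documents are indexed; the second maps 100 and counting, which is how a handful of intended fields can blow past a 1000-field limit.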

Can you share the json document which is generated by your code and sent to elasticsearch?

Thanks for that question. It forced a much deeper look into my code; indeed, it was my bug, not Elasticsearch's. I can report the nature of the bug, since it reveals "how Elasticsearch thinks". A primary field in my mappings is "lox", and all other fields key off it. It appears that if you send in a document that is missing "lox", Elasticsearch doesn't know where to put the data, so it creates new fields as if this were a whole new kind of document. Making sure "lox" is always set ended the issue entirely.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.