I am not creating fields on the fly, unless there is a new bug. I've been running ES 6.1 against lots of "rows" of data when, suddenly, I started seeing an error saying I have exceeded the 1000-field limit on that index.
I am interested in hints on where to look; with just a half dozen possible fields in use, this shouldn't be a case of needing to increase the limit. Or is it?
The idea is that, for any given document identified by "lox", there can be collections of labels and details keyed by a language code; the plan is to add other language codes later.
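For a rough picture, the mapping is along these lines (a simplified sketch -- the type name `_doc` and the exact field names and types here are illustrative, not my production mapping):

```
# Hypothetical, simplified version of the "topics" mapping
PUT /topics
{
  "mappings": {
    "_doc": {
      "properties": {
        "lox":   { "type": "keyword" },
        "label": {
          "properties": {
            "en": { "type": "text" }
          }
        },
        "details": {
          "properties": {
            "en": { "type": "text" }
          }
        }
      }
    }
  }
}
```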
Here is the error log:
2018-05-03 15:52:14 ERROR LoggingPlatform:45 - ProviderClient.put: Elasticsearch exception [type=illegal_argument_exception, reason=Limit of total fields [1000] in index [topics] has been exceeded]
ElasticsearchStatusException[Elasticsearch exception [type=illegal_argument_exception, reason=Limit of total fields [1000] in index [topics] has been exceeded]]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177)
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:573)
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:549)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:456)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:429)
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:312)
at org.topicquests.es.ProviderClient.put(ProviderClient.java:146)
at org.topicquests.ks.tm.DataProvider.addLabel(DataProvider.java:744)
at org.topicquests.ks.tm.Proxy.addLabel(Proxy.java:487)
at org.topicquests.ks.tm.ProxyModel.newNode(ProxyModel.java:88)
at org.topicquests.ks.tm.ProxyModel.newInstanceNode(ProxyModel.java:156)
@Christian_Dahlqvist Just curious: I am now running these exercises on 6.2.2. I raised the field limit to 2000, which lets runs go longer -- I have yet to break it -- but that feels like masking something else. That something else could be a bug in my code, or it could be Elasticsearch version related; I have no way of knowing which yet. I plan to try running against 6.2.4 without the increased limit.
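For reference, raising the limit is a one-line index settings update (the 2000 is arbitrary; it only papers over whatever is actually creating the fields):

```
# Raise the total-fields cap on the existing index
PUT /topics/_settings
{
  "index.mapping.total_fields.limit": 2000
}
```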
@Christian_Dahlqvist I can report that 6.2.4 did not make the ElasticsearchStatusException [type=illegal_argument_exception, reason=Limit of total fields [1000] in index [topics] has been exceeded] go away. Kibana shows that only 7 fields are in play and available. From an issue on GitHub, I landed at Mapping | Elasticsearch Guide [5.0] | Elastic, which says:
The following settings allow you to limit the number of field mappings that can be created manually or dynamically, in order to prevent bad documents from causing a mapping explosion:
index.mapping.total_fields.limit
That raises this question: mapping explosions -- what are they, and how are they caused?
I have a fixed (small) number of fields. What in a client program can cause an explosion in mappings -- which I interpret to mean that some of my documents contain fields that are not in the mapping, so those fields get created dynamically?
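For concreteness, here is the pattern I gather causes an explosion: data values used as field names, so every document mints brand-new fields under dynamic mapping. These documents are hypothetical, not mine:

```
# Each document uses a unique value as a field NAME,
# so dynamic mapping adds new fields to the index on every insert.
PUT /topics/_doc/1
{ "user_1001": { "lastVisit": "2018-05-03" } }

PUT /topics/_doc/2
{ "user_1002": { "lastVisit": "2018-05-04" } }
```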
Thanks for that question. It forced a much deeper look into my code; indeed, it was my bug, not Elasticsearch's. I can report the nature of the bug, which reveals "how Elasticsearch thinks". A primary field in my mappings is "lox"; all other fields key off it. It appears that if you send in a document that is missing "lox", ES doesn't know where to put it, so it creates new fields, as if this were a whole new kind of document. Making sure "lox" is always set ended the issue entirely.
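In hindsight, setting the mapping's dynamic behavior to strict would have surfaced this immediately: a malformed document gets rejected with a strict_dynamic_mapping_exception instead of silently minting new fields. A sketch, assuming the type is named `_doc`:

```
# Reject documents containing fields not already in the mapping,
# rather than dynamically adding them.
PUT /topics/_mapping/_doc
{
  "dynamic": "strict"
}
```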